diff --git a/custom-dictionary.txt b/custom-dictionary.txt index 3ccf201a08..7911f890ff 100644 --- a/custom-dictionary.txt +++ b/custom-dictionary.txt @@ -62,3 +62,7 @@ hcledit self-hosting infrachanges Entra +GLMU +myprodsa +azuread +mysa diff --git a/docs/2.0/docs/accountfactory/architecture/security-controls.md b/docs/2.0/docs/accountfactory/architecture/security-controls.md index d41576a336..6e584f5c45 100644 --- a/docs/2.0/docs/accountfactory/architecture/security-controls.md +++ b/docs/2.0/docs/accountfactory/architecture/security-controls.md @@ -79,7 +79,7 @@ Requires the following tokens be created: - `INFRA_ROOT_WRITE_TOKEN`: Fine-grained PAT with read/write access to infrastructure repositories - `ORG_REPO_ADMIN_TOKEN`: Fine-grained PAT with admin access for repository management -See [Setup via Machine Users](/2.0/docs/pipelines/installation/viamachineusers.md) for more details. +See [Setup via Machine Users](/2.0/docs/pipelines/installation/viamachineusers) for more details. diff --git a/docs/2.0/docs/pipelines/installation/addingnewrepo.md b/docs/2.0/docs/accountfactory/installation/addingnewrepo.md similarity index 86% rename from docs/2.0/docs/pipelines/installation/addingnewrepo.md rename to docs/2.0/docs/accountfactory/installation/addingnewrepo.md index 9001969540..bb248b70cf 100644 --- a/docs/2.0/docs/pipelines/installation/addingnewrepo.md +++ b/docs/2.0/docs/accountfactory/installation/addingnewrepo.md @@ -1,6 +1,6 @@ -# Initial Setup +# Adding Account Factory to a new repository -To configure Gruntwork Pipelines in a new GitHub repository, complete the following steps: +To configure Gruntwork Account Factory in a new GitHub repository, the following steps are required (and will be explained in detail below): 1. Create your `infrastructure-live-root` repository using Gruntwork's GitHub template. 2. Configure the Gruntwork.io GitHub App to authorize your `infrastructure-live-root` repository, or ensure that the appropriate machine user tokens are set up as repository or organization secrets. @@ -23,7 +23,7 @@ Navigate to the template repository and select **Use this template** -> **Create Use the Gruntwork.io GitHub App to [add the repository as an Infra Root repository](/2.0/docs/pipelines/installation/viagithubapp#configuration). -If using the [machine user model](/2.0/docs/pipelines/installation/viamachineusers.md), ensure the `INFRA_ROOT_WRITE_TOKEN` (and `ORG_REPO_ADMIN_TOKEN` for enterprise customers) is added to the repository as a secret or configured as an organization secret. +If using the [machine user model](/2.0/docs/pipelines/installation/viamachineusers), ensure the `INFRA_ROOT_WRITE_TOKEN` (and `ORG_REPO_ADMIN_TOKEN` for enterprise customers) is added to the repository as a secret or configured as an organization secret. ## Updating the Bootstrap Workflow @@ -47,5 +47,5 @@ Each of your repositories will contain a Bootstrap Pull Request. Follow the inst :::info -The bootstrapping pull requests include pre-configured files, such as a `mise.toml` file that specifies versions of OpenTofu and Terragrunt. Ensure you review and update these configurations to align with your organization's requirements. +The bootstrapping pull requests include pre-configured files, such as a `.mise.toml` file that specifies versions of OpenTofu and Terragrunt. Ensure you review and update these configurations to align with your organization's requirements. 
::: diff --git a/docs/2.0/docs/accountfactory/installation/index.md b/docs/2.0/docs/accountfactory/installation/index.md index 11a240e3b7..2d71cba52c 100644 --- a/docs/2.0/docs/accountfactory/installation/index.md +++ b/docs/2.0/docs/accountfactory/installation/index.md @@ -2,16 +2,16 @@ ## Overview -Account Factory is automatically integrated into [new Pipelines root repositories](/2.0/docs/pipelines/installation/addingnewrepo) during the bootstrapping process. +Account Factory is automatically integrated into [new Pipelines root repositories](/2.0/docs/accountfactory/installation/addingnewrepo) during the bootstrapping process. By default, Account Factory includes the following components: - 📋 An HTML form for generating workflow inputs: `.github/workflows/account-factory-inputs.html` - + - 🏭 A workflow for generating new requests: `.github/workflows/account-factory.yml` - + - 🗃️ A root directory for tracking account requests: `_new-account-requests` - + - ⚙️ A YAML file for tracking account names and IDs: `accounts.yml` For detailed instructions on using these components, refer to the [Vending a New AWS Account Guide](/2.0/docs/accountfactory/guides/vend-aws-account). @@ -19,6 +19,3 @@ For detailed instructions on using these components, refer to the [Vending a New ## Configuring account factory Account Factory is fully operational for vending new accounts without requiring any configuration changes. However, a [comprehensive reference for all configuration options is available here](/2.0/reference/accountfactory/configurations), allowing you to customize values and templates for generating Infrastructure as Code (IaC) for new accounts. - - - diff --git a/docs/2.0/docs/pipelines/installation/prerequisites/awslandingzone.md b/docs/2.0/docs/accountfactory/prerequisites/awslandingzone.md similarity index 97% rename from docs/2.0/docs/pipelines/installation/prerequisites/awslandingzone.md rename to docs/2.0/docs/accountfactory/prerequisites/awslandingzone.md index 397d5ba301..073482cd19 100644 --- a/docs/2.0/docs/pipelines/installation/prerequisites/awslandingzone.md +++ b/docs/2.0/docs/accountfactory/prerequisites/awslandingzone.md @@ -1,11 +1,10 @@ import CustomizableValue from '/src/components/CustomizableValue'; - # Landing Zone ## Overview -The Landing Zone component establishes an initial best-practice AWS multi-account setup. +The Landing Zone component establishes an initial best-practice AWS multi-account setup for use with Gruntwork Account Factory. ## Extending AWS Control Tower @@ -242,16 +241,15 @@ Complete the following steps to prepare for Gruntwork Account Factory: 3. Switch to the `Users` tab, select your management user from the list and click **Next** - 4. Select `AWSAdministratorAccess` from the list of Permission Sets, then click **Next** + 4. Select `AWSAdministratorAccess` from the list of Permission Sets, then click **Next** - 5. Click `Submit` to finish assigning access to your user + 5. Click `Submit` to finish assigning access to your user ## Next steps Now that Control Tower is configured, consider these next steps: + - [Set up IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-choose-identity-source.html) for access control. - [Apply required controls or SCPs](https://docs.aws.amazon.com/controltower/latest/userguide/controls.html). - [Install Gruntwork Pipelines](/2.0/docs/pipelines/installation/viagithubapp). - [Set up Gruntwork Account Factory](/2.0/docs/accountfactory/installation). 
- - diff --git a/docs/2.0/docs/overview/getting-started/index.md b/docs/2.0/docs/overview/getting-started/index.mdx similarity index 91% rename from docs/2.0/docs/overview/getting-started/index.md rename to docs/2.0/docs/overview/getting-started/index.mdx index 4f3d5dca36..9d5c306b74 100644 --- a/docs/2.0/docs/overview/getting-started/index.md +++ b/docs/2.0/docs/overview/getting-started/index.mdx @@ -1,13 +1,14 @@ -# Setting up DevOps Foundations & Components import PersistentCheckbox from '/src/components/PersistentCheckbox'; +# Setting up DevOps Foundations & Components + ### Step 1: [Activate your Gruntwork account](/2.0/docs/overview/getting-started/create-account) Create your Gruntwork account and invite your team members to access Gruntwork resources. -### Step 2: [Set up a Landing Zone](/2.0/docs/pipelines/installation/prerequisites/awslandingzone) +### Step 2: [Set up a Landing Zone](/2.0/docs/accountfactory/prerequisites/awslandingzone) Follow Gruntwork's AWS Landing Zone walkthrough to implement a best-practice multi-account setup, ready for use with DevOps Foundations. @@ -22,7 +23,7 @@ Set up authentication for Pipelines to enable secure automation of infrastructur ### Step 4: Create new Pipelines repositories - [New GitHub repository](/2.0/docs/pipelines/installation/addingnewrepo) -- [New GitLab repository](/2.0/docs/pipelines/installation/addingnewgitlabrepo) +- [New GitLab repository](/2.0/docs/pipelines/installation/addinggitlabrepo) Alternatively, you can add Pipelines to an existing repository: @@ -40,7 +41,8 @@ During the Pipelines setup process, configure Gruntwork Account Factory for AWS ### Step 6: Start using DevOps Foundations You're all set! You can now: + - [Build with the Gruntwork IaC Library](/2.0/docs/library/tutorials/deploying-your-first-gruntwork-module) -- Automatically [plan and apply IaC changes with Pipelines](/2.0/docs/pipelines/guides/running-plan-apply) +- [Automatically plan and apply IaC changes with Pipelines](/2.0/docs/pipelines/guides/running-plan-apply) - [Vend new AWS accounts with Account Factory](/2.0/docs/accountfactory/guides/vend-aws-account) - [Keep your infrastructure up to date with Patcher](/2.0/docs/patcher/concepts/) diff --git a/docs/2.0/docs/pipelines/architecture/execution-flow.md b/docs/2.0/docs/pipelines/architecture/execution-flow.md index b7cbbbc47b..dbaa9f1326 100644 --- a/docs/2.0/docs/pipelines/architecture/execution-flow.md +++ b/docs/2.0/docs/pipelines/architecture/execution-flow.md @@ -10,6 +10,6 @@ The orchestrator analyzes each infrastructure change in a pull request or git co ## Executor -The executor receives as inputs a pipeline action (e.g. `terragrunt plan`) and a specific unit of infrastructure that has been changed (e.g. `/path/to/changed-unit/terragrunt.hcl`) and executes the specified action on the specified unit. +The executor receives as inputs a pipeline action (e.g. `terragrunt plan`) and a specific unit of infrastructure that has been changed (e.g. `/path/to/changed-unit/terragrunt.hcl`) and executes the specified action on the specified unit. For example, when responding to a `ModuleUpdated` event for `/some/unit/terragrunt.hcl`, the executor might execute a `terragrunt apply` on `/some/unit/terragrunt.hcl`. Or when responding to `AccountsAdded` events on merge, the executor may create a follow-up pull request in the `infrastructure-live-root` repository to include additional IaC code for baselining the newly added accounts. 
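+
+As an illustrative sketch only (this is not the executor's actual interface), the effect of an `apply` action on a changed unit is roughly what you would get by running the corresponding Terragrunt command from that unit's directory:
+
+```bash
+# Hypothetical equivalent of the executor handling an apply action for /some/unit/terragrunt.hcl
+cd /some/unit
+terragrunt apply
+```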
diff --git a/docs/2.0/docs/pipelines/architecture/index.md b/docs/2.0/docs/pipelines/architecture/index.md index e75a7905b7..da3951ad39 100644 --- a/docs/2.0/docs/pipelines/architecture/index.md +++ b/docs/2.0/docs/pipelines/architecture/index.md @@ -8,7 +8,7 @@ Outside of the main binary, Pipelines has several other components that work tog By design, customers run the binary as part of their CI/CD pipelines (e.g. GitHub Actions, GitLab CI, etc.). As such, Gruntwork provides out-of-the-box CI/CD configurations for supported platforms when customers sign up for Gruntwork Pipelines. -We likewise provide CI/CD configurations for [Gruntwork Account Factory](https://docs.gruntwork.io/account-factory/overview). +We likewise provide CI/CD configurations for [Gruntwork Account Factory](https://docs.gruntwork.io/account-factory/overview). When using Gruntwork Pipelines without Gruntwork Account Factory, customers are responsible for configuring their repositories to use the appropriate CI/CD configuration for that platform (see [Adding Pipelines to an Existing Repository](/2.0/docs/pipelines/installation/addingexistingrepo) for more information). This code is typically fairly minimal, and the majority of the work is done by reusable workflows made available by Gruntwork, and the binary itself. diff --git a/docs/2.0/docs/pipelines/architecture/security-controls.md b/docs/2.0/docs/pipelines/architecture/security-controls.md index 6b88e281b8..1c70edec13 100644 --- a/docs/2.0/docs/pipelines/architecture/security-controls.md +++ b/docs/2.0/docs/pipelines/architecture/security-controls.md @@ -47,7 +47,7 @@ Requires that the following tokens are created: - `INFRA_ROOT_WRITE_TOKEN`: Fine-grained PAT with read/write access to infrastructure repositories - `ORG_REPO_ADMIN_TOKEN`: Fine-grained PAT with admin access for repository management -See [Setup via Machine Users](/2.0/docs/pipelines/installation/viamachineusers.md) for more details. +See [Setup via Machine Users](/2.0/docs/pipelines/installation/viamachineusers) for more details. diff --git a/docs/2.0/docs/pipelines/concepts/cloud-auth/index.md b/docs/2.0/docs/pipelines/concepts/cloud-auth/index.md index b016790dc5..aa91b06be3 100644 --- a/docs/2.0/docs/pipelines/concepts/cloud-auth/index.md +++ b/docs/2.0/docs/pipelines/concepts/cloud-auth/index.md @@ -17,9 +17,9 @@ Cloud authentication in Pipelines is built on the principle of least privilege a Currently, Pipelines supports authentication to the following cloud providers: -- [AWS](./aws.mdx) - AWS authentication using OIDC -- [Azure](./azure.md) - Azure authentication using OIDC -- [Custom](./custom.md) - Custom authentication you can implement yourself +- [AWS](/2.0/docs/pipelines/concepts/cloud-auth/aws) - AWS authentication using OIDC +- [Azure](/2.0/docs/pipelines/concepts/cloud-auth/azure) - Azure authentication using OIDC +- [Custom](/2.0/docs/pipelines/concepts/cloud-auth/custom) - Custom authentication you can implement yourself ## Security Best Practices diff --git a/docs/2.0/docs/pipelines/configuration/driftdetection.md b/docs/2.0/docs/pipelines/configuration/driftdetection.md index f80f52b702..043ccb414b 100644 --- a/docs/2.0/docs/pipelines/configuration/driftdetection.md +++ b/docs/2.0/docs/pipelines/configuration/driftdetection.md @@ -2,4 +2,4 @@ If you are a Pipelines Enterprise customer using GitHub or GitLab and used the infrastructure-live-root repository template to install Pipelines, Drift Detection is already included and available as a workflow in your repository. 
-For installations not based on the template, follow the [Installing Drift Detection Guide](/2.0/docs/pipelines/guides/installing-drift-detection.md) to enable Drift Detection. +For standalone installations that did not use the `infrastructure-live-root` repository template, follow the [Installing Drift Detection Guide](/2.0/docs/pipelines/guides/installing-drift-detection.md) to enable Drift Detection. diff --git a/docs/2.0/docs/pipelines/configuration/settings.md b/docs/2.0/docs/pipelines/configuration/settings.md index 6f6627980f..9671ed755d 100644 --- a/docs/2.0/docs/pipelines/configuration/settings.md +++ b/docs/2.0/docs/pipelines/configuration/settings.md @@ -1,11 +1,9 @@ # Pipelines Configuration -[Full Pipelines Configuration Reference](/docs/2.0/reference/pipelines/configurations.md) - import PipelinesConfig from '/docs/2.0/reference/pipelines/language_auth_partial.mdx' -## Terraform & OpenTofu +## OpenTofu & Terraform -You can specify whether to invoke Terraform or OpenTofu in your Pipeline by configuring the [tf-binary](/2.0/reference/pipelines/configurations#tf-binary) setting. Define the versions of `tf-binary` and Terragrunt in the [mise.toml](/2.0/reference/pipelines/configurations#example-mise-configuration) file within your repository. +You can specify whether to invoke OpenTofu or Terraform with Pipelines by configuring the [tf-binary](/2.0/reference/pipelines/configurations#tf-binary) setting. Define the versions of Terragrunt and OpenTofu/Terraform used by Pipelines in the [mise.toml](/2.0/reference/pipelines/configurations#example-mise-configuration) file within your repository. diff --git a/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx b/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx index e68dd8fc26..1f25ee7a27 100644 --- a/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx +++ b/docs/2.0/docs/pipelines/guides/setup-delegated-repo.mdx @@ -19,7 +19,7 @@ Delegating infrastructure management might be necessary for reasons such as: For example, a repository with application code may need to build and push a container image to AWS ECR before deploying it to a Kubernetes cluster. -The following guide assumes you have completed the [Pipelines Setup & Installation](/2.0/docs/pipelines/installation/prerequisites/awslandingzone.md). +The following guide assumes you have completed the [Pipelines Setup & Installation](/2.0/docs/accountfactory/prerequisites/awslandingzone). ## Step 1 - Verify the delegated account setup diff --git a/docs/2.0/docs/pipelines/installation/addingexistinggitlabrepo.mdx b/docs/2.0/docs/pipelines/installation/addingexistinggitlabrepo.mdx new file mode 100644 index 0000000000..5ac66ca6dd --- /dev/null +++ b/docs/2.0/docs/pipelines/installation/addingexistinggitlabrepo.mdx @@ -0,0 +1,891 @@ +# Bootstrap Pipelines in an Existing GitLab Project + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import PersistentCheckbox from '/src/components/PersistentCheckbox'; +import CustomizableValue from '/src/components/CustomizableValue'; + +This guide provides comprehensive instructions for integrating [Gruntwork Pipelines](https://gruntwork.io/products/pipelines/) into an existing GitLab project with Infrastructure as Code (IaC). This is designed for Gruntwork customers who want to add Pipelines to their current infrastructure projects for streamlined CI/CD management. + +To configure Gruntwork Pipelines in an existing GitLab project, complete the following steps (which are explained in detail below): + +1. 
**(If using a self-hosted GitLab instance) Ensure OIDC configuration and JWKS are publicly accessible.** +2. **Plan your Pipelines setup** by identifying all environments and cloud accounts/subscriptions you need to manage. +3. **Bootstrap core infrastructure** in accounts/subscriptions that don't already have the required OIDC and state management resources. +4. **Configure SCM access** using [machine users](/2.0/docs/pipelines/installation/viamachineusers) with appropriate Personal Access Tokens (PATs). +5. **Create `.gruntwork` HCL configurations** to tell Pipelines how to authenticate and organize your environments. +6. **Create `.gitlab-ci.yml`** to configure your GitLab CI/CD pipeline. +7. **Commit and push** your changes to activate Pipelines. + +## Ensure OIDC configuration and JWKS are publicly accessible + +This step only applies if you are using a self-hosted GitLab instance that is not accessible from the public internet. If you are using GitLab.com or a self-hosted instance that is publicly accessible, you can skip this step. + +1. [Follow GitLab's instructions](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) for hosting your OIDC configuration and JWKS in a public location (e.g. S3 Bucket). This is necessary for both Gruntwork and the AWS OIDC provider to access the GitLab OIDC configuration and JWKS when authenticating JWT's generated by your custom instance. +2. Note the (stored as `ci_id_tokens_issuer_url` in your `gitlab.rb` file per GitLab's instructions) generated above for reuse in the next steps. + +:::note Progress Checklist + + + +::: + +## Prerequisites + +Before starting, ensure you have: + +- **An active Gruntwork subscription** with Pipelines access. Verify by checking the [Gruntwork Developer Portal](https://app.gruntwork.io/account) and confirming access to "pipelines" repositories in your GitHub team. +- **Cloud provider credentials** with permissions to create OIDC providers and IAM roles in accounts where Pipelines will manage infrastructure. +- **Git installed** locally for cloning and managing your project. +- **Existing IaC project** with Terragrunt configurations you want to manage with Pipelines (if you are using OpenTofu/Terraform, and want to start using Terragrunt, read the [Quickstart Guide](https://terragrunt.gruntwork.io/docs/getting-started/quick-start)). + +## Planning Your Pipelines Setup + +Before implementing Pipelines, it's crucial to plan your setup by identifying all the environments and cloud resources you need to manage. + +### Identify Your Environments + +Review your existing project structure and identify: + +1. **All environments** you want to manage with Pipelines (e.g., `dev`, `staging`, `prod`) +2. **Cloud accounts/subscriptions** associated with each environment +3. **Directory paths** in your project that contain Terragrunt units for each environment +4. **Existing OIDC resources** that may already be provisioned in your accounts + +:::note Progress Checklist + + + + + + +::: + +### Determine Required OIDC Roles + +For each AWS Account / Azure Subscription you want to manage, you might already have some or all of the following resources provisioned. 
+ + + + +**Required AWS Resources:** + +- An OIDC provider for GitLab +- An IAM role for Pipelines to assume when running Terragrunt plan commands +- An IAM role for Pipelines to assume when running Terragrunt apply commands + + + + +**Required Azure Resources:** + +- Entra ID Application for plans with Federated Identity Credential +- Entra ID Application for applies with Federated Identity Credential +- Service Principals with appropriate role assignments +- Storage Account and Container for Terragrunt state storage (if not already existing) + + + + +:::note Progress Checklist + + + + +::: + +## Configuring SCM Access + +Pipelines needs the ability to interact with GitLab to fetch resources (e.g. IaC code, reusable CI/CD code and the Pipelines binary itself). + +To create machine users for GitLab access, follow our [machine users guide](/2.0/docs/pipelines/installation/viamachineusers) to set up the appropriate Personal Access Tokens (PATs) with the required permissions. + +:::note Progress Checklist + + + +::: + +## Bootstrapping Cloud Infrastructure + +If your AWS accounts / Azure subscriptions don't already have all the required OIDC and state management resources, you'll need to bootstrap them. This section provides the infrastructure code needed to set up these resources. + +:::tip + +If you already have all the resources listed, you can skip this section. + +If you have some of them provisioned, but not all, you can decide to either destroy the resources you already have provisioned and recreate them or import them into state. If you are not sure, please contact [Gruntwork support](/support). + +::: + +### Prepare Your Project + +Clone your project to your local machine using [Git](https://docs.gitlab.com/user/project/repository/index.html#clone-a-repository) if you haven't already. + +:::tip + +If you don't have Git installed, you can install it by following the official guide for [Git installation](https://git-scm.com/downloads). + +::: + +For example: + +```bash +git clone git@gitlab.com:acme/infrastructure-live.git +cd infrastructure-live +``` + +:::note Progress Checklist + + + + +::: + +To bootstrap your project, we'll use Boilerplate to scaffold it with the necessary IaC code to provision the infrastructure necessary for Pipelines to function. + +The easiest way to install Boilerplate is to use `mise` to install it. + +:::tip + +If you don't have `mise` installed, you can install it by following the official guide for [mise installation](https://mise.jdx.dev/getting-started.html). + +::: + +```bash +mise use -g boilerplate@latest +``` + +:::tip + +If you'd rather install a specific version of Boilerplate, you can use the `ls-remote` command to list the available versions. + +```bash +mise ls-remote boilerplate +``` + +::: + +:::note Progress Checklist + + + +::: + +If you don't already have Terragrunt and OpenTofu installed locally, you can install them using `mise`: + +```bash +mise use -g terragrunt@latest opentofu@latest +``` + +:::note Progress Checklist + + + +::: + +### Cloud-specific bootstrap instructions + + + + +The resources you need provisioned in AWS to start managing resources with Pipelines are: + +1. An OpenID Connect (OIDC) provider +2. An IAM role for Pipelines to assume when running Terragrunt plan commands +3. An IAM role for Pipelines to assume when running Terragrunt apply commands + +For every account you want Pipelines to manage infrastructure in. + +:::tip Don't Panic! 
+ +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Use Boilerplate to scaffold bootstrap configurations in your project for each AWS account +2. Use Terragrunt to provision these resources in your AWS accounts +3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Bootstrap Your Project for AWS</h3>

+ +First, confirm that you have a `root.hcl` file in the root of your project that looks something like this: + +```hcl title="root.hcl" +locals { + account_hcl = read_terragrunt_config(find_in_parent_folders("account.hcl")) + state_bucket_name = local.account_hcl.locals.state_bucket_name + + region_hcl = read_terragrunt_config(find_in_parent_folders("region.hcl")) + aws_region = local.region_hcl.locals.aws_region +} + +remote_state { + backend = "s3" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + bucket = local.state_bucket_name + region = local.aws_region + key = "${path_relative_to_include()}/tofu.tfstate" + encrypt = true + use_lockfile = true + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Provision AWS Bootstrap Resources</h3>

+ +Once you've scaffolded out the accounts you want to bootstrap, you can use Terragrunt to provision the resources in each of these accounts. + +:::tip + +Make sure that you authenticate to each AWS account you are bootstrapping using AWS credentials for that account before you attempt to provision resources in it. + +You can follow the documentation [here](https://search.opentofu.org/provider/hashicorp/aws/latest#authentication-and-configuration) to authenticate with the AWS provider. You are advised to choose an authentication method that doesn't require any hard-coded credentials, like assuming an IAM role. + +::: + +For each account you want to bootstrap, you'll need to run the following commands: + +First, make sure that everything is set up correctly by running a plan in the `bootstrap` directory in `name-of-account/_global` where `name-of-account` is the name of the AWS account you want to bootstrap. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the AWS provider on every run by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + +::: + +Next, apply the changes to your account. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache apply +``` + +:::note Progress Checklist + + + + +::: + +
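+
+If you'd like to sanity-check the bootstrap results, the following optional sketch (assuming you kept the default `pipelines-plan` and `pipelines-apply` role names from the bootstrap module) uses the AWS CLI to confirm the OIDC provider and IAM roles now exist in the account:
+
+```bash
+# List the OIDC providers in the account; the GitLab provider should appear here
+aws iam list-open-id-connect-providers
+
+# Confirm that the plan and apply roles exist
+aws iam get-role --role-name pipelines-plan
+aws iam get-role --role-name pipelines-apply
+```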
+ + +The resources you need provisioned in Azure to start managing resources with Pipelines are: + +1. An Azure Resource Group for OpenTofu state resources + 1. An Azure Storage Account in that resource group for OpenTofu state storage + 1. An Azure Storage Container in that storage account for OpenTofu state storage +2. An Entra ID Application to use for plans + 1. A Flexible Federated Identity Credential for the application to authenticate with your project on any branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + 2. A role assignment for the service principal to access the Azure Storage Account +3. An Entra ID Application to use for applies + 1. A Federated Identity Credential for the application to authenticate with your project on the deploy branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Use Boilerplate to scaffold bootstrap configurations in your project for each Azure subscription +2. Use Terragrunt to provision these resources in your Azure subscription +3. Finalizing Terragrunt configurations using the bootstrap resources we just provisioned +4. Pull the bootstrap resources into state, now that we have configured a remote state backend +5. (Optionally) Bootstrap additional Azure subscriptions until all your Azure subscriptions are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Bootstrap Your Project for Azure</h3>

+ +For each Azure subscription that needs bootstrapping, we'll use Boilerplate to scaffold the necessary content. Run this command from the root of your project for each subscription: + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +You'll need to run this boilerplate command once for each Azure subscription you want to manage with Pipelines. Boilerplate will prompt you for subscription-specific values each time. + +::: + +:::tip + +You can reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=dev' \ + --var 'GitLabGroupName=acme' \ + --var 'GitLabRepoName=infrastructure-live' \ + --var 'GitLabInstanceURL=https://gitlab.com' \ + --var 'SubscriptionName=dev' \ + --var 'AzureTenantID=00000000-0000-0000-0000-000000000000' \ + --var 'AzureSubscriptionID=11111111-1111-1111-1111-111111111111' \ + --var 'AzureLocation=East US' \ + --var 'StateResourceGroupName=pipelines-rg' \ + --var 'StateStorageAccountName=mysa' \ + --var 'StateStorageContainerName=tfstate' \ + --non-interactive +``` + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + +```yaml title="vars.yml" +AccountName: dev +GitLabGroupName: acme +GitLabRepoName: infrastructure-live +GitLabInstanceURL: https://gitlab.com +SubscriptionName: dev +AzureTenantID: 00000000-0000-0000-0000-000000000000 +AzureSubscriptionID: 11111111-1111-1111-1111-111111111111 +AzureLocation: East US +StateResourceGroupName: pipelines-rg +StateStorageAccountName: my-storage-account +StateStorageContainerName: tfstate +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +::: + +:::note Progress Checklist + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Provision Azure Bootstrap Resources</h3>

+ +Once you've scaffolded out the subscriptions you want to bootstrap, you can use Terragrunt to provision the resources in your Azure subscription. + +If you haven't already, you'll want to authenticate to Azure using the `az` CLI. + +```bash +az login +``` + +:::note Progress Checklist + + + +::: + + +To dynamically configure the Azure provider with a given tenant ID and subscription ID, ensure that you are exporting the following environment variables if you haven't the values via the `az` CLI: + +- `ARM_TENANT_ID` +- `ARM_SUBSCRIPTION_ID` + +For example: + +```bash +export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" +export ARM_SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111" +``` + +:::note Progress Checklist + + + +::: + +First, make sure that everything is set up correctly by running a plan in the subscription directory. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the Azure provider on every run to speed up the process by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + +::: + +:::note Progress Checklist + + + +::: + +Next, apply the changes to your subscription. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate apply +``` + +:::tip + +We're adding the `--no-stack-generate` flag here, as Terragrunt will already have the requisite stack configurations generated, and we don't want to accidentally overwrite any configurations while we have state stored locally before we pull them into remote state. + +::: + +:::note Progress Checklist + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Finalizing Terragrunt configurations</h3>

+ +Once you've provisioned the resources in your Azure subscription, you can finalize the Terragrunt configurations using the bootstrap resources we just provisioned. + +First, edit the `root.hcl` file in the root of your project to leverage the storage account we just provisioned. + +If your `root.hcl` file doesn't already have a remote state backend configuration, you'll need to add one that looks like this: + +```hcl title="root.hcl" +locals { + sub_hcl = read_terragrunt_config(find_in_parent_folders("sub.hcl")) + + state_resource_group_name = local.sub_hcl.locals.state_resource_group_name + state_storage_account_name = local.sub_hcl.locals.state_storage_account_name + state_storage_container_name = local.sub_hcl.locals.state_storage_container_name +} + +remote_state { + backend = "azurerm" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + resource_group_name = local.state_resource_group_name + storage_account_name = local.state_storage_account_name + container_name = local.state_storage_container_name + key = "${path_relative_to_include()}/tofu.tfstate" + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +Next, finalize the `.gruntwork/environment-.hcl` file in the root of your project to reference the IDs for the applications we just provisioned. + +You can find the values for the `plan_client_id` and `apply_client_id` by running `terragrunt stack output` in the `bootstrap` directory in `name-of-subscription/bootstrap`. + +```bash +terragrunt stack output +``` + +The relevant bits that you want to extract from the stack output are the following: + +```hcl +bootstrap = { + apply_app = { + client_id = "33333333-3333-3333-3333-333333333333" + } + plan_app = { + client_id = "44444444-4444-4444-4444-444444444444" + } +} +``` + +You can use those values to set the values for `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-.hcl` file. + +:::note Progress Checklist + + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Pulling the resources into state</h3>

+ +Once you've provisioned the resources in your Azure subscription, you can pull the resources into state using the storage account we just provisioned. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- init -migrate-state -force-copy +``` + +:::tip + +We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiting for an interactive prompt to copy up local state. + +::: + +:::note Progress Checklist + + + +::: + +
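+
+As an optional check (a sketch that assumes the state migration completed without errors), you can confirm that your units now read from the remote backend by listing the resources tracked in state:
+
+```bash title="name-of-subscription"
+terragrunt run --all --non-interactive --no-stack-generate -- state list
+```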
+
+ +## Creating `.gruntwork` HCL Configurations + +Create [HCL configurations](/2.0/reference/pipelines/configurations-as-code/) in the `.gruntwork` directory in the root of your project to tell Pipelines how you plan to organize your infrastructure, and how you plan to have Pipelines authenticate with your cloud provider(s). + +### The `repository` block + +The core configuration that you'll want to start with is the `repository` block. This block tells Pipelines which branch has the "live" infrastructure you want provisioned. When you merge IaC to this branch, Pipelines will be triggered to update your infrastructure accordingly. + +```hcl title=".gruntwork/repository.hcl" +repository { + deploy_branch_name = "main" +} +``` + +:::note Progress Checklist + + + + +::: + +### The `environment` block + +Next, you'll want to define the environments you want to manage with Pipelines using the [`environment` block](/2.0/reference/pipelines/configurations-as-code/api#environment-block). + +For each environment, you'll want to define a [`filter` block](/2.0/reference/pipelines/configurations-as-code/api#filter-block) that tells Pipelines which units are part of that environment. You'll also want to define an [`authentication` block](/2.0/reference/pipelines/configurations-as-code/api#authentication-block) that tells Pipelines how to authenticate with your cloud provider(s) for that environment. + + + + +```hcl title=".gruntwork/environment-production.hcl" +environment "production" { + filter { + paths = ["prod/*"] + } + + authentication { + aws_oidc { + account_id = "123456789012" + plan_iam_role_arn = "arn:aws:iam::123456789012:role/pipelines-plan" + apply_iam_role_arn = "arn:aws:iam::123456789012:role/pipelines-apply" + } + } +} +``` + +:::tip + +Learn more about how Pipelines authenticates to AWS in the [Authenticating to AWS](/2.0/docs/pipelines/concepts/cloud-auth/aws) page. + +::: + +:::tip + +Check out the [aws block](/2.0/reference/pipelines/configurations-as-code/#aws-blocks) for more information on how to configure Pipelines to reuse common AWS configurations. + +::: + +:::note Progress Checklist + + + + + + + +::: + + + + +```hcl title=".gruntwork/environment-production.hcl" +environment "production" { + filter { + paths = ["prod/*"] + } + + authentication { + azure_oidc { + tenant_id = "00000000-0000-0000-0000-000000000000" + subscription_id = "11111111-1111-1111-1111-111111111111" + + plan_client_id = "33333333-3333-3333-3333-333333333333" + apply_client_id = "44444444-4444-4444-4444-444444444444" + } + } +} +``` + +:::tip + +Learn more about how Pipelines authenticates to Azure in the [Authenticating to Azure](/2.0/docs/pipelines/concepts/cloud-auth/azure) page. + +::: + +:::note Progress Checklist + + + + + + + + +::: + + + + +```hcl title=".gruntwork/environment-production.hcl" +environment "production" { + filter { + paths = ["prod/*"] + } + + authentication { + custom { + auth_provider_cmd = "./scripts/custom-auth-prod.sh" + } + } +} +``` + +:::tip + +Learn more about how Pipelines can authenticate with custom authentication in the [Custom Authentication](/2.0/docs/pipelines/concepts/cloud-auth/custom) page. 
+ +::: + +:::note Progress Checklist + + + + + + + +::: + + + + +## Creating `.gitlab-ci.yml` + +Create a `.gitlab-ci.yml` file in the root of your project with the following content: + +```yaml title=".gitlab-ci.yml" +include: + - project: 'gruntwork-io/gitlab-pipelines-workflows' + file: '/workflows/pipelines.yml' + ref: 'v1' +``` + +:::tip + +You can read the [Pipelines GitLab CI Workflow](https://gitlab.com/gruntwork-io/gitlab-pipelines-workflows) to learn how this GitLab CI pipeline calls the Pipelines CLI to run your pipelines. + +::: + +:::note Progress Checklist + + + +::: + +## Commit and Push Your Changes + +Commit and push your changes to your project. + +:::note + +You should include `[skip ci]` in your commit message here to prevent triggering the Pipelines workflow before everything is properly configured. + +::: + +```bash +git add . +git commit -m "Add Pipelines configurations and GitLab CI workflow [skip ci]" +git push +``` + +:::note Progress Checklist + + + + +::: + +🚀 You've successfully added Gruntwork Pipelines to your existing GitLab project! + +## Next Steps + +You have successfully completed the installation of Gruntwork Pipelines in an existing GitLab project. Proceed to [Deploying your first infrastructure change](/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.md) to begin deploying changes. + +## Troubleshooting Tips + +If you encounter issues during the setup process, here are some common troubleshooting steps: + +### Bootstrap Resources Failure + +If your bootstrap resource provisioning fails: + + + + + + + +### HCL Configuration Issues + +If your HCL configurations aren't working as expected: + + + + + +### GitLab CI Pipeline Issues + +If your GitLab CI pipeline isn't working as expected: + + + + + + + + + diff --git a/docs/2.0/docs/pipelines/installation/addingexistingrepo.md b/docs/2.0/docs/pipelines/installation/addingexistingrepo.md deleted file mode 100644 index 9149fbd80c..0000000000 --- a/docs/2.0/docs/pipelines/installation/addingexistingrepo.md +++ /dev/null @@ -1,553 +0,0 @@ -import CustomizableValue from '/src/components/CustomizableValue'; - -# Adding Gruntwork Pipelines to an existing repository - -This guide provides instructions for installing Gruntwork Pipelines in a repository with existing IaC. This guide is for Gruntwork customers looking to integrate Pipelines into their existing repositories for streamlined infrastructure management. - -:::info - -This process leverages a new configuration paradigm for Pipelines called ["Pipelines Configuration as Code"](/2.0/reference/pipelines/configurations-as-code), introduced in July 2024. This system allows developers to use Gruntwork Pipelines with any folder structure in their IaC repositories. Previously, Pipelines required a specific folder layout to map source control directories to AWS Accounts for authentication. - -**As of Q4 2024, this new configuration system does not yet support the [Gruntwork Account Factory](https://docs.gruntwork.io/2.0/docs/accountfactory/concepts/).** If you need both Pipelines and the Account Factory, we recommend [starting with a new repository](/2.0/docs/pipelines/installation/addingnewrepo) or contacting [Gruntwork support](/support) for assistance. -::: - -## Prerequisites - -- **Active Gruntwork subscription**: Ensure your account includes access to Pipelines. Verify access by navigating to the "View team in GitHub" option in the [Gruntwork Developer Portal's account page](https://app.gruntwork.io/account) if you are an admin. 
From the GitHub team UI, search for "pipelines" under the repositories tab to confirm access. -- **AWS credentials**: You need credentials with permissions to create resources in the AWS account where Pipelines will be deployed. This includes creating an OpenID Connect (OIDC) Provider and AWS Identity and Access Management (IAM) roles for Pipelines to use when deploying infrastructure. - -## Setting up the repository - -### Account information - -Create an `accounts.yml` file in the root directory of your repository with the following content. Replace , , and with the appropriate values for the account you are deploying to. Add additional accounts as needed to manage them with Pipelines. - - ```yaml title="accounts.yml" - # required: Name of an account - $$AWS_ACCOUNT_NAME$$: - # required: The AWS account ID - id: "$$AWS_ACCOUNT_ID$$" - # required: The email address of the account owner - email: "$$AWS_ACCOUNT_EMAIL$$" - ``` - -### Pipelines configurations - -Create a file named `.gruntwork/gruntwork.hcl` in the root directory of your repository with the following content. This file is used to configure Pipelines for your repository. Update the specified placeholders with the appropriate values: - -- : Specify a name that represents the environment being deployed, such as `production`, `staging`, or `development`. -- : Define the root-relative path of the folder in your repository that contains the terragrunt units for the environment you are deploying to. This may be the same as the environment name if there is a directory in the root of the repository that contains all the terragrunt units for the environment. -- : Enter the AWS Account ID associated with the deployment of Terragrunt units for the specified environment. -- : Specify the branch name used for deployments, such as `main` or `master`. This branch will trigger the Pipelines apply workflow when changes are merged. Pull requests targeting this branch will trigger the Pipelines plan workflow. - - -```hcl title=".gruntwork/gruntwork.hcl" -# Configurations applicable to the entire repository https://docs.gruntwork.io/2.0/docs/pipelines/installation/addingexistingrepo#repository-blocks -repository { - deploy_branch_name = "$$DEPLOY_BRANCH_NAME$$" -} - -aws { - accounts "all" { - // Reading the accounts.yml file from the root of the repository - path = "../accounts.yml" - } -} - -# Configurations that are applicable to a specific environment within a repository # https://docs.gruntwork.io/2.0/docs/pipelines/installation/addingexistingrepo#environment-blocks -environment "$$ENVIRONMENT_NAME$$" { - filter { - paths = ["$$PATH_TO_ENVIRONMENT$$/*"] - } - - authentication { - aws_oidc { - account_id = aws.accounts.all.$$AWS_ACCOUNT_NAME$$.id - plan_iam_role_arn = "arn:aws:iam::${aws.accounts.all.$$AWS_ACCOUNT_NAME$$.id}:role/pipelines-plan" - apply_iam_role_arn = "arn:aws:iam::${aws.accounts.all.$$AWS_ACCOUNT_NAME$$.id}:role/pipelines-apply" - } - } -} -``` - -The IAM roles mentioned in the unit configuration above will be created in the [Pipelines OpenID Connect (OIDC) Provider and Roles](#pipelines-openid-connectoidc-provider-and-roles) section. - -For additional environments, you can add new [environment configurations](/2.0/reference/pipelines/configurations-as-code#environment-configurations). Alternatively, consider using [unit configuration](/2.0/reference/pipelines/configurations-as-code#unit-configurations) for Terragrunt units in your repository that do not align with an environment configuration. 
- -### Pipelines GitHub Actions (GHA) workflow - -Pipelines is implemented using a GitHub [reusable workflow](https://docs.github.com/en/actions/sharing-automations/reusing-workflows#creating-a-reusable-workflow). The actual code for Pipelines and its features resides in an external repository, typically [Gruntwork's Pipelines Workflows repository](https://github.com/gruntwork-io/pipelines-workflows/). Your repository references this external workflow rather than containing the implementation itself. - -Create a file named `.github/workflows/pipelines.yml` in the root of your repository with the following content: - -
-Pipelines GHA workflow file - -```yaml title=".github/workflows/pipelines.yml" -###################################################################################################################### -# INFRASTRUCTURE CI/CD CONFIGURATION -# -# This file configures GitHub Actions to implement a CI/CD pipeline for managing infrastructure code. -# -# The pipeline defined in this configuration includes the following steps: -# -# - For any commit on any branch, identify all Terragrunt modules that have changed between the `HEAD` of the branch and -# `main`, and run `terragrunt plan` on each of those modules. -# - For commits to `main`, execute `terragrunt apply` on each of the updated modules. -# -###################################################################################################################### - -name: Pipelines -run-name: "[GWP]: ${{ github.event.commits[0].message || github.event.pull_request.title || 'No commit message' }}" -on: - push: - branches: - - $$DEPLOY_BRANCH_NAME$$ - paths-ignore: - # Workflow does not run only if ALL filepaths match the pattern. See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#example-excluding-paths - - ".github/**" - pull_request: - types: - - opened - - synchronize - - reopened - -# Permissions to assume roles and create pull requests -permissions: - id-token: write - -jobs: - GruntworkPipelines: - # https://github.com/gruntwork-io/pipelines-workflows/blob/v3/.github/workflows/pipelines.yml - uses: gruntwork-io/pipelines-workflows/.github/workflows/pipelines.yml@v3 - secrets: - PIPELINES_READ_TOKEN: ${{ secrets.PIPELINES_READ_TOKEN }} - - PipelinesPassed: - needs: GruntworkPipelines - if: always() - runs-on: ubuntu-latest - steps: - - run: | - echo "::debug::RESULT: $RESULT" - if [[ $RESULT = "success" ]]; then - echo "GruntworkPipelines completed successfully!" - else - echo "GruntworkPipelines failed!" - exit 1 - fi - env: - RESULT: ${{ needs.GruntworkPipelines.result }} -``` - -
- -### Pipelines OpenID Connect (OIDC) provider and roles - -This step involves creating the Infrastructure as Code (IaC) configuration for the [OIDC](https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services) roles required by Pipelines to deploy infrastructure. - -Two roles are needed: -- `pipelines-plan` for plans -- `pipelines-apply` for applies - -Using two distinct roles upholds the principle of least privilege. The `pipelines-plan` role is used during pull request creation or updates and requires primarily read-only permissions. The `pipelines-apply` role, used during pull request merges, requires read/write permissions. Additionally, these roles have different IAM trust policies. The `apply` role only trusts the deploy branch, while the `plan` role trusts all branches. - -This step requires AWS credentials with sufficient permissions to create the necessary IAM resources that Pipelines will assume when deploying infrastructure. - -#### Create the Terragrunt units - -Within the ** directory, create the Terragrunt unit files as described below, updating the following values as needed: - -- : Specify the state bucket name or pattern of the state bucket(s) to be used for the environment. The Pipeline roles must have permissions to access the state bucket for storing and retrieving state files. -- : Specify the name of the DynamoDB table used for state locking. -- : Provide the exact name of the repository where Pipelines is being configured. - -
-OIDC Provider - -```hcl title="$$PATH_TO_ENVIRONMENT$$/_global/github-actions-openid-connect-provider/terragrunt.hcl" -terraform { - source = "git@github.com:gruntwork-io/terraform-aws-security.git//modules/github-actions-openid-connect-provider?ref=v0.74.5" -} - -# Include the root `terragrunt.hcl` configuration, which has settings common across all environments & components. -include "root" { - path = find_in_parent_folders() -} - -inputs = { - allowed_organizations = [ - "$$GITHUB_ORG_NAME$$", - ] -} -``` - -
- -
-Pipelines Plan - -```hcl title="$$PATH_TO_ENVIRONMENT$$/_global/pipelines-plan-role/terragrunt.hcl" -terraform { - source = "git@github.com:gruntwork-io/terraform-aws-security.git//modules/github-actions-iam-role?ref=v0.74.5" -} - -# Include the root `terragrunt.hcl` configuration, which has settings common across all environments & components. -include "root" { - path = find_in_parent_folders() -} - -# The OIDC IAM roles for GitHub Actions require an IAM OpenID Connect (OIDC) Provider to be provisioned for each account. -# The underlying module used in `envcommon` is capable of creating the OIDC provider. Since multiple OIDC roles are required, -# a dedicated module is used, and all roles depend on its output -dependency "github-actions-openid-connect-provider" { - config_path = "../github-actions-openid-connect-provider" - - # Configure mock outputs for the `validate` command that are returned when there are no outputs available (e.g the - # module hasn't been applied yet. - mock_outputs_allowed_terraform_commands = ["validate", "plan"] - mock_outputs_merge_strategy_with_state = "shallow" - mock_outputs = { - arn = "known_after_apply" - url = "token.actions.githubusercontent.com" - } -} - -locals { - state_bucket_pattern = lower("$$AWS_STATE_BUCKET_PATTERN$$") -} - -inputs = { - github_actions_openid_connect_provider_arn = dependency.github-actions-openid-connect-provider.outputs.arn - github_actions_openid_connect_provider_url = dependency.github-actions-openid-connect-provider.outputs.url - - allowed_sources_condition_operator = "StringLike" - - allowed_sources = { - "$$GITHUB_ORG_NAME$$/$$INFRASTRUCTURE_LIVE_REPO_NAME$$" : ["*"] - } - - custom_iam_policy_name = "pipelines-plan-oidc-policy" - iam_role_name = "pipelines-plan" - - # Policy based on these docs: - # https://terragrunt.gruntwork.io/docs/features/aws-auth/#aws-iam-policies - iam_policy = { - # State permissions - "DynamoDBLocksTableAccess" = { - effect = "Allow" - actions = [ - "dynamodb:PutItem", - "dynamodb:GetItem", - "dynamodb:DescribeTable", - "dynamodb:DeleteItem", - "dynamodb:CreateTable", - ] - resources = ["arn:aws:dynamodb:*:*:table/$$AWS_DYNAMO_DB_TABLE$$"] - } - "S3StateBucketAccess" = { - effect = "Allow" - actions = [ - "s3:ListBucket", - "s3:GetBucketVersioning", - "s3:GetBucketAcl", - "s3:GetBucketLogging", - "s3:CreateBucket", - "s3:PutBucketPublicAccessBlock", - "s3:PutBucketTagging", - "s3:PutBucketPolicy", - "s3:PutBucketVersioning", - "s3:PutEncryptionConfiguration", - "s3:PutBucketAcl", - "s3:PutBucketLogging", - "s3:GetEncryptionConfiguration", - "s3:GetBucketPolicy", - "s3:GetBucketPublicAccessBlock", - "s3:PutLifecycleConfiguration", - "s3:PutBucketOwnershipControls", - ] - resources = [ - "arn:aws:s3:::${local.state_bucket_pattern}", - ] - } - "S3StateBucketObjectAccess" = { - effect = "Allow" - actions = [ - "s3:PutObject", - "s3:GetObject" - ] - resources = [ - "arn:aws:s3:::${local.state_bucket_pattern}/*", - ] - } - } -} -``` - -
- -
-Pipelines Apply - - - -```hcl title="$$PATH_TO_ENVIRONMENT$$/_global/pipelines-apply-role/terragrunt.hcl" -terraform { - source = "git@github.com:gruntwork-io/terraform-aws-security.git//modules/github-actions-iam-role?ref=v0.74.5" -} - -# Include the root `terragrunt.hcl` configuration, which has settings common across all environments & components. -include "root" { - path = find_in_parent_folders() -} - -# The OIDC IAM roles for GitHub Actions require an IAM OpenID Connect (OIDC) Provider to be provisioned for each account. -# The underlying module used in `envcommon` is capable of creating the OIDC provider. Since multiple OIDC roles are required, -# a dedicated module is used, and all roles depend on its output. -dependency "github-actions-openid-connect-provider" { - config_path = "../github-actions-openid-connect-provider" - - # Configure mock outputs for the `validate` command that are returned when there are no outputs available (e.g the - # module hasn't been applied yet. - mock_outputs_allowed_terraform_commands = ["validate", "plan"] - mock_outputs_merge_strategy_with_state = "shallow" - mock_outputs = { - arn = "known_after_apply" - url = "token.actions.githubusercontent.com" - } -} - -locals { - # Automatically load account-level variables - state_bucket_pattern = lower("$$AWS_STATE_BUCKET_PATTERN$$") -} - -inputs = { - github_actions_openid_connect_provider_arn = dependency.github-actions-openid-connect-provider.outputs.arn - github_actions_openid_connect_provider_url = dependency.github-actions-openid-connect-provider.outputs.url - - allowed_sources = { - "$$GITHUB_ORG_NAME$$/$$INFRASTRUCTURE_LIVE_REPO_NAME$$" : ["$$DEPLOY_BRANCH_NAME$$"] - } - - # Policy for OIDC role assumed from GitHub in the "$$GITHUB_ORG_NAME$$/$$INFRASTRUCTURE_LIVE_REPO_NAME$$" repo - custom_iam_policy_name = "pipelines-apply-oidc-policy" - iam_role_name = "pipelines-apply" - - # Policy based on these docs: - # https://terragrunt.gruntwork.io/docs/features/aws-auth/#aws-iam-policies - iam_policy = { - "IamPassRole" = { - resources = ["*"] - actions = ["iam:*"] - effect = "Allow" - } - "IamCreateRole" = { - resources = [ - "arn:aws:iam::*:role/aws-service-role/orgsdatasync.servicecatalog.amazonaws.com/AWSServiceRoleForServiceCatalogOrgsDataSync" - ] - actions = ["iam:CreateServiceLinkedRole"] - effect = "Allow" - } - "S3BucketAccess" = { - resources = ["*"] - actions = ["s3:*"] - effect = "Allow" - } - "DynamoDBLocksTableAccess" = { - resources = ["arn:aws:dynamodb:*:*:table/terraform-locks"] - actions = ["dynamodb:*"] - effect = "Allow" - } - "OrganizationsDeployAccess" = { - resources = ["*"] - actions = ["organizations:*"] - effect = "Allow" - } - "ControlTowerDeployAccess" = { - resources = ["*"] - actions = ["controltower:*"] - effect = "Allow" - } - "IdentityCenterDeployAccess" = { - resources = ["*"] - actions = ["sso:*", "ds:*", "sso-directory:*"] - effect = "Allow" - } - "ECSDeployAccess" = { - resources = ["*"] - actions = ["ecs:*"] - effect = "Allow" - } - "ACMDeployAccess" = { - resources = ["*"] - actions = ["acm:*"] - effect = "Allow" - } - "AutoScalingDeployAccess" = { - resources = ["*"] - actions = ["autoscaling:*"] - effect = "Allow" - } - "CloudTrailDeployAccess" = { - resources = ["*"] - actions = ["cloudtrail:*"] - effect = "Allow" - } - "CloudWatchDeployAccess" = { - resources = ["*"] - actions = ["cloudwatch:*", "logs:*"] - effect = "Allow" - } - "CloudFrontDeployAccess" = { - resources = ["*"] - actions = ["cloudfront:*"] - effect = "Allow" - } - "ConfigDeployAccess" = { - 
resources = ["*"] - actions = ["config:*"] - effect = "Allow" - } - "EC2DeployAccess" = { - resources = ["*"] - actions = ["ec2:*"] - effect = "Allow" - } - "ECRDeployAccess" = { - resources = ["*"] - actions = ["ecr:*"] - effect = "Allow" - } - "ELBDeployAccess" = { - resources = ["*"] - actions = ["elasticloadbalancing:*"] - effect = "Allow" - } - "GuardDutyDeployAccess" = { - resources = ["*"] - actions = ["guardduty:*"] - effect = "Allow" - } - "IAMDeployAccess" = { - resources = ["*"] - actions = ["iam:*", "access-analyzer:*"] - effect = "Allow" - } - "KMSDeployAccess" = { - resources = ["*"] - actions = ["kms:*"] - effect = "Allow" - } - "LambdaDeployAccess" = { - resources = ["*"] - actions = ["lambda:*"] - effect = "Allow" - } - "Route53DeployAccess" = { - resources = ["*"] - actions = ["route53:*", "route53domains:*", "route53resolver:*"] - effect = "Allow" - } - "SecretsManagerDeployAccess" = { - resources = ["*"] - actions = ["secretsmanager:*"] - effect = "Allow" - } - "SNSDeployAccess" = { - resources = ["*"] - actions = ["sns:*"] - effect = "Allow" - } - "SQSDeployAccess" = { - resources = ["*"] - actions = ["sqs:*"] - effect = "Allow" - } - "SecurityHubDeployAccess" = { - resources = ["*"] - actions = ["securityhub:*"] - effect = "Allow" - } - "MacieDeployAccess" = { - resources = ["*"] - actions = ["macie2:*"] - effect = "Allow" - } - "ServiceQuotaDeployAccess" = { - resources = ["*"] - actions = ["servicequotas:*"] - effect = "Allow" - } - "EKSAccess" = { - resources = ["*"] - actions = ["eks:*"] - effect = "Allow" - } - "EventBridgeAccess" = { - resources = ["*"] - actions = ["events:*"] - effect = "Allow" - } - "ApplicationAutoScalingAccess" = { - resources = ["*"] - actions = ["application-autoscaling:*"] - effect = "Allow" - } - "ApiGatewayAccess" = { - resources = ["*"] - actions = ["apigateway:*"] - effect = "Allow" - } - } -} -``` - - - -
- - -:::tip - -The permissions in the files above are provided as examples and should be adjusted to align with the specific types of infrastructure managed in the repository. This ensures that Pipelines can execute the required actions to deploy your infrastructure effectively. - -Additionally, note that the IAM permissions outlined above do not include permissions to modify the role itself, for security purposes. - -::: - -Repeat this step for each environment you would like to manage with Pipelines. - -#### Create the OIDC resources - -Use your personal AWS access to execute the following commands to deploy the infrastructure for the Terragrunt units created in the previous step. Repeat this process for each account you plan to manage with Pipelines. - - ```bash - cd $$PATH_TO_ENVIRONMENT$$/_global - terragrunt run-all plan - ``` - -Review the plan output, and if everything appears correct, proceed to apply the changes. - - - ```bash - terragrunt run-all apply - ``` - -:::tip - -If you encounter issues with the plan or apply steps due to the presence of other resources in the *_global* folder, you can run the plan/apply steps individually for the Terragrunt units. Start with the `github-actions-openid-connect-provider` unit, as other units depend on it. - -::: - -#### Commit and push the changes - -Create a new branch and commit all changes, including **`[skip ci]`** in the commit message to prevent triggering the Pipelines workflow. Push the changes to the repository, create a Pull Request, and merge the changes into the branch specified in the `.github/workflows/pipelines.yml` file. - -## Enable GitHub authentication for pipelines - -Follow the instructions in [Authenticating via GitHub App](/2.0/docs/pipelines/installation/viagithubapp) to enable GitHub authentication for Pipelines in your repository using the Gruntwork.io GitHub App. This is the recommended authentication method. Alternatively, you can [Authenticate via Machine Users](/2.0/docs/pipelines/installation/viamachineusers) if preferred. - -## Next steps - -You have successfully completed the installation of Gruntwork Pipelines in an existing repository. Proceed to [Deploying your first infrastructure change](/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.md) to begin deploying changes. diff --git a/docs/2.0/docs/pipelines/installation/addingexistingrepo.mdx b/docs/2.0/docs/pipelines/installation/addingexistingrepo.mdx new file mode 100644 index 0000000000..94b88343c9 --- /dev/null +++ b/docs/2.0/docs/pipelines/installation/addingexistingrepo.mdx @@ -0,0 +1,879 @@ +# Bootstrap Pipelines in an Existing Repository + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import PersistentCheckbox from '/src/components/PersistentCheckbox'; + +This guide provides comprehensive instructions for integrating [Gruntwork Pipelines](https://gruntwork.io/products/pipelines/) into an existing repository with Infrastructure as Code (IaC). This is designed for Gruntwork customers who want to add Pipelines to their current infrastructure repositories for streamlined CI/CD management. + +To configure Gruntwork Pipelines in an existing repository, complete the following steps (which are explained in detail below): + +1. **Plan your Pipelines setup** by identifying all environments and cloud accounts/subscriptions you need to manage. +2. **Bootstrap core infrastructure** in accounts/subscriptions that don't already have the required OIDC and state management resources. +3. 
**Configure SCM access** using either the [Gruntwork.io GitHub App](https://github.com/apps/gruntwork-io) or [machine users](https://docs.github.com/en/get-started/learning-about-github/types-of-github-accounts#user-accounts). +4. **Create `.gruntwork` HCL configurations** to tell Pipelines how to authenticate and organize your environments. +5. **Create `.github/workflows/pipelines.yml`** to configure your GitHub Actions workflow. +6. **Commit and push** your changes to activate Pipelines. + +## Prerequisites + +Before starting, ensure you have: + +- **An active Gruntwork subscription** with Pipelines access. Verify by checking the [Gruntwork Developer Portal](https://app.gruntwork.io/account) and confirming access to "pipelines" repositories in your GitHub team. +- **Cloud provider credentials** with permissions to create OIDC providers and IAM roles in accounts where Pipelines will manage infrastructure. +- **Git installed** locally for cloning and managing your repository. +- **Existing IaC repository** with Terragrunt configurations you want to manage with Pipelines (if you are using OpenTofu/Terraform, and want to start using Terragrunt, read the [Quickstart Guide](https://terragrunt.gruntwork.io/docs/getting-started/quick-start)). + +## Planning Your Pipelines Setup + +Before implementing Pipelines, it's crucial to plan your setup by identifying all the environments and cloud resources you need to manage. + +### Identify Your Environments + +Review your existing repository structure and identify: + +1. **All environments** you want to manage with Pipelines (e.g., `dev`, `staging`, `prod`) +2. **Cloud accounts/subscriptions** associated with each environment +3. **Directory paths** in your repository that contain Terragrunt units for each environment +4. **Existing OIDC resources** that may already be provisioned in your accounts + +:::note Progress Checklist + + + + + + +::: + +### Determine Required OIDC Roles + +For each AWS Account / Azure Subscription you want to manage, you might already have some or all of the following resources provisioned. + + + + +**Required AWS Resources:** + +- An OIDC provider for GitHub Actions +- An IAM role for Pipelines to assume when running Terragrunt plan commands +- An IAM role for Pipelines to assume when running Terragrunt apply commands + + + + +**Required Azure Resources:** + +- Entra ID Application for plans with Federated Identity Credential +- Entra ID Application for applies with Federated Identity Credential +- Service Principals with appropriate role assignments +- Storage Account and Container for Terragrunt state storage (if not already existing) + + + + +:::note Progress Checklist + + + + +::: + +## Configuring SCM Access + +Pipelines needs the ability to interact with Source Control Management (SCM) platforms to fetch resources (e.g. IaC code, reusable CI/CD code and the Pipelines binary itself). + +There are two ways to configure SCM access for Pipelines: + +1. Using the [Gruntwork.io GitHub App](/2.0/docs/pipelines/installation/viagithubapp#configuration) (recommended for most GitHub users). +2. Using a [machine user](/2.0/docs/pipelines/installation/viamachineusers) (recommended for GitHub users who cannot use the GitHub App). + +:::note Progress Checklist + + + +::: + +## Bootstrapping Cloud Infrastructure + +If your AWS accounts / Azure subscriptions don't already have all the required OIDC and state management resources, you'll need to bootstrap them. 
This section provides the infrastructure code needed to set up these resources. + +:::tip + +If you already have all the resources listed, you can skip this section. + +If you have some of them provisioned, but not all, you can decide to either destroy the resources you already have provisioned and recreate them or import them into state. If you are not sure, please contact [Gruntwork support](/support). + +::: + +### Prepare Your Repository + +Clone your repository to your local machine using [Git](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) if you haven't already. + +:::tip + +If you don't have Git installed, you can install it by following the official guide for [Git installation](https://git-scm.com/downloads). + +::: + +For example: + +```bash +git clone git@github.com:acme/infrastructure-live.git +cd infrastructure-live +``` + +:::note Progress Checklist + + + + +::: + +To bootstrap your repository, we'll use Boilerplate to scaffold it with the necessary IaC code to provision the infrastructure necessary for Pipelines to function. + +The easiest way to install Boilerplate is to use `mise` to install it. + +:::tip + +If you don't have `mise` installed, you can install it by following the official guide for [mise installation](https://mise.jdx.dev/getting-started.html). + +::: + +```bash +mise use -g boilerplate@latest +``` + +:::tip + +If you'd rather install a specific version of Boilerplate, you can use the `ls-remote` command to list the available versions. + +```bash +mise ls-remote boilerplate +``` + +::: + +:::note Progress Checklist + + + +::: + +If you don't already have Terragrunt and OpenTofu installed locally, you can install them using `mise`: + +```bash +mise use -g terragrunt@latest opentofu@latest +``` + +:::note Progress Checklist + + + +::: + +### Cloud-specific bootstrap instructions + + + + +The resources you need provisioned in AWS to start managing resources with Pipelines are: + +1. An OpenID Connect (OIDC) provider +2. An IAM role for Pipelines to assume when running Terragrunt plan commands +3. An IAM role for Pipelines to assume when running Terragrunt apply commands + +For every account you want Pipelines to manage infrastructure in. + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your repository is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your repository. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Use Boilerplate to scaffold bootstrap configurations in your repository for each AWS account +2. Use Terragrunt to provision these resources in your AWS accounts +3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

Bootstrap Your Repository for AWS

+ +First, confirm that you have a `root.hcl` file in the root of your repository that looks something like this: + +```hcl title="root.hcl" +locals { + account_hcl = read_terragrunt_config(find_in_parent_folders("account.hcl")) + state_bucket_name = local.account_hcl.locals.state_bucket_name + + region_hcl = read_terragrunt_config(find_in_parent_folders("region.hcl")) + aws_region = local.region_hcl.locals.aws_region +} + +remote_state { + backend = "s3" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + bucket = local.state_bucket_name + region = local.aws_region + key = "${path_relative_to_include()}/tofu.tfstate" + encrypt = true + use_lockfile = true + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
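+
+:::tip
+
+The `root.hcl` above reads an `account.hcl` and a `region.hcl` from parent folders of each unit. If your repository doesn't define these yet, a minimal sketch of what it expects looks like the following — the paths, bucket name, and region here are illustrative assumptions, not values produced by the scaffold:
+
+```hcl title="name-of-account/account.hcl"
+locals {
+  # Name of the S3 bucket root.hcl uses for OpenTofu state in this account
+  state_bucket_name = "acme-name-of-account-tofu-state"
+}
+```
+
+```hcl title="name-of-account/us-east-1/region.hcl"
+locals {
+  # AWS region root.hcl uses for the state backend and generated provider
+  aws_region = "us-east-1"
+}
+```
+
+:::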

Provision AWS Bootstrap Resources

+ +Once you've scaffolded out the accounts you want to bootstrap, you can use Terragrunt to provision the resources in each of these accounts. + +:::tip + +Make sure that you authenticate to each AWS account you are bootstrapping using AWS credentials for that account before you attempt to provision resources in it. + +You can follow the documentation [here](https://search.opentofu.org/provider/hashicorp/aws/latest#authentication-and-configuration) to authenticate with the AWS provider. You are advised to choose an authentication method that doesn't require any hard-coded credentials, like assuming an IAM role. + +::: + +For each account you want to bootstrap, you'll need to run the following commands: + +First, make sure that everything is set up correctly by running a plan in the `bootstrap` directory in `name-of-account/_global` where `name-of-account` is the name of the AWS account you want to bootstrap. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the AWS provider on every run by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + +::: + +Next, apply the changes to your account. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache apply +``` + +:::note Progress Checklist + + + + +::: + +
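+
+If you'd like to sanity-check what the bootstrap units just created, the core of it is a GitHub Actions OIDC provider plus IAM roles whose trust policies only accept tokens from your repository. The sketch below is illustrative only — the real resources come from the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) modules, and the role and repository names here are assumptions:
+
+```hcl
+# Illustrative only: OIDC provider for GitHub Actions in this account.
+# Older AWS provider versions also require a thumbprint_list here.
+resource "aws_iam_openid_connect_provider" "github" {
+  url            = "https://token.actions.githubusercontent.com"
+  client_id_list = ["sts.amazonaws.com"]
+}
+
+# Trust policy that only lets workflows from acme/infrastructure-live assume the role.
+data "aws_iam_policy_document" "pipelines_plan_trust" {
+  statement {
+    actions = ["sts:AssumeRoleWithWebIdentity"]
+
+    principals {
+      type        = "Federated"
+      identifiers = [aws_iam_openid_connect_provider.github.arn]
+    }
+
+    condition {
+      test     = "StringEquals"
+      variable = "token.actions.githubusercontent.com:aud"
+      values   = ["sts.amazonaws.com"]
+    }
+
+    condition {
+      test     = "StringLike"
+      variable = "token.actions.githubusercontent.com:sub"
+      values   = ["repo:acme/infrastructure-live:*"]
+    }
+  }
+}
+
+resource "aws_iam_role" "pipelines_plan" {
+  name               = "pipelines-plan"
+  assume_role_policy = data.aws_iam_policy_document.pipelines_plan_trust.json
+}
+```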
+ + +The resources you need provisioned in Azure to start managing resources with Pipelines are: + +1. An Azure Resource Group for OpenTofu state resources + 1. An Azure Storage Account in that resource group for OpenTofu state storage + 1. An Azure Storage Container in that storage account for OpenTofu state storage +2. An Entra ID Application to use for plans + 1. A Flexible Federated Identity Credential for the application to authenticate with your repository on any branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + 2. A role assignment for the service principal to access the Azure Storage Account +3. An Entra ID Application to use for applies + 1. A Federated Identity Credential for the application to authenticate with your repository on the deploy branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your repository is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your repository. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Use Boilerplate to scaffold bootstrap configurations in your repository for each Azure subscription +2. Use Terragrunt to provision these resources in your Azure subscription +3. Finalizing Terragrunt configurations using the bootstrap resources we just provisioned +4. Pull the bootstrap resources into state, now that we have configured a remote state backend +5. (Optionally) Bootstrap additional Azure subscriptions until all your Azure subscriptions are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

Bootstrap Your Repository for Azure

+ +For each Azure subscription that needs bootstrapping, we'll use Boilerplate to scaffold the necessary content. Run this command from the root of your repository for each subscription: + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +You'll need to run this boilerplate command once for each Azure subscription you want to manage with Pipelines. Boilerplate will prompt you for subscription-specific values each time. + +::: + +:::tip + +You can reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=dev' \ + --var 'GitHubOrgName=acme' \ + --var 'GitHubRepoName=infrastructure-live' \ + --var 'SubscriptionName=dev' \ + --var 'AzureTenantID=00000000-0000-0000-0000-000000000000' \ + --var 'AzureSubscriptionID=11111111-1111-1111-1111-111111111111' \ + --var 'AzureLocation=East US' \ + --var 'StateResourceGroupName=pipelines-rg' \ + --var 'StateStorageAccountName=mysa' \ + --var 'StateStorageContainerName=tfstate' \ + --non-interactive +``` + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + +```yaml title="vars.yml" +AccountName: dev +GitHubOrgName: acme +GitHubRepoName: infrastructure-live +SubscriptionName: dev +AzureTenantID: 00000000-0000-0000-0000-000000000000 +AzureSubscriptionID: 11111111-1111-1111-1111-111111111111 +AzureLocation: East US +StateResourceGroupName: pipelines-rg +StateStorageAccountName: my-storage-account +StateStorageContainerName: tfstate +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +::: + +:::note Progress Checklist + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

Provision Azure Bootstrap Resources

+
+Once you've scaffolded out the subscriptions you want to bootstrap, you can use Terragrunt to provision the resources in your Azure subscription.
+
+If you haven't already, you'll want to authenticate to Azure using the `az` CLI.
+
+```bash
+az login
+```
+
+:::note Progress Checklist
+
+
+:::
+
+To dynamically configure the Azure provider with a given tenant ID and subscription ID, ensure that you export the following environment variables if you haven't already set these values via the `az` CLI:
+
+- `ARM_TENANT_ID`
+- `ARM_SUBSCRIPTION_ID`
+
+For example:
+
+```bash
+export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
+export ARM_SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111"
+```
+
+:::note Progress Checklist
+
+
+:::
+
+First, make sure that everything is set up correctly by running a plan in the subscription directory.
+
+```bash title="name-of-subscription"
+terragrunt run --all --non-interactive --provider-cache plan
+```
+
+:::tip
+
+We're using the `--provider-cache` flag here to speed things up by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/), so the Azure provider isn't re-downloaded on every run.
+
+:::
+
+:::note Progress Checklist
+
+
+:::
+
+Next, apply the changes to your subscription.
+
+```bash title="name-of-subscription"
+terragrunt run --all --non-interactive --provider-cache --no-stack-generate apply
+```
+
+:::tip
+
+We're adding the `--no-stack-generate` flag here, as Terragrunt will already have the requisite stack configurations generated, and we don't want to accidentally overwrite any configurations while we have state stored locally before we pull them into remote state.
+
+:::
+
+:::note Progress Checklist
+
+
+:::
+
+{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */}
+

Finalize Terragrunt Configurations

+ +Once you've provisioned the resources in your Azure subscription, you can finalize the Terragrunt configurations using the bootstrap resources we just provisioned. + +First, edit the `root.hcl` file in the root of your repository to leverage the storage account we just provisioned. + +If your `root.hcl` file doesn't already have a remote state backend configuration, you'll need to add one that looks like this: + +```hcl title="root.hcl" +locals { + sub_hcl = read_terragrunt_config(find_in_parent_folders("sub.hcl")) + + state_resource_group_name = local.sub_hcl.locals.state_resource_group_name + state_storage_account_name = local.sub_hcl.locals.state_storage_account_name + state_storage_container_name = local.sub_hcl.locals.state_storage_container_name +} + +remote_state { + backend = "azurerm" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + resource_group_name = local.state_resource_group_name + storage_account_name = local.state_storage_account_name + container_name = local.state_storage_container_name + key = "${path_relative_to_include()}/tofu.tfstate" + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +Next, finalize the `.gruntwork/environment-.hcl` file in the root of your repository to reference the IDs for the applications we just provisioned. + +You can find the values for the `plan_client_id` and `apply_client_id` by running `terragrunt stack output` in the `bootstrap` directory in `name-of-subscription/bootstrap`. + +```bash +terragrunt stack output +``` + +The relevant bits that you want to extract from the stack output are the following: + +```hcl +bootstrap = { + apply_app = { + client_id = "33333333-3333-3333-3333-333333333333" + } + plan_app = { + client_id = "44444444-4444-4444-4444-444444444444" + } +} +``` + +You can use those values to set the values for `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-.hcl` file. + +:::note Progress Checklist + + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
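+
+:::tip
+
+The `root.hcl` above reads its state settings from a `sub.hcl` file in the subscription directory. The Boilerplate scaffold should already have generated one for you; if you need to create or adjust it by hand, a minimal version consistent with the example values used earlier in this guide (which are assumptions, not required names) looks like this:
+
+```hcl title="name-of-subscription/sub.hcl"
+locals {
+  # These must match the state resources provisioned by the bootstrap unit
+  state_resource_group_name    = "pipelines-rg"
+  state_storage_account_name   = "mysa"
+  state_storage_container_name = "tfstate"
+}
+```
+
+:::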

Pull the Resources into State

+ +Once you've provisioned the resources in your Azure subscription, you can pull the resources into state using the storage account we just provisioned. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- init -migrate-state -force-copy +``` + +:::tip + +We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiting for an interactive prompt to copy up local state. + +::: + +:::note Progress Checklist + + + +::: + +
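+
+With the state migrated, this subscription is fully bootstrapped. The client IDs you extracted from `terragrunt stack output` above end up in the matching `.gruntwork` environment file, which the next section covers in detail. As a quick preview (the environment name, paths, and IDs below are the example values from this guide, not generated output):
+
+```hcl title=".gruntwork/environment-dev.hcl"
+environment "dev" {
+  filter {
+    paths = ["dev/*"]
+  }
+
+  authentication {
+    azure_oidc {
+      tenant_id       = "00000000-0000-0000-0000-000000000000"
+      subscription_id = "11111111-1111-1111-1111-111111111111"
+
+      # plan_app.client_id from the stack output
+      plan_client_id  = "44444444-4444-4444-4444-444444444444"
+
+      # apply_app.client_id from the stack output
+      apply_client_id = "33333333-3333-3333-3333-333333333333"
+    }
+  }
+}
+```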
+
+ +## Creating `.gruntwork` HCL Configurations + +Create [HCL configurations](/2.0/reference/pipelines/configurations-as-code/) in the `.gruntwork` directory in the root of your repository to tell Pipelines how you plan to organize your infrastructure, and how you plan to have Pipelines authenticate with your cloud provider(s). + +### The `repository` block + +The core configuration that you'll want to start with is the `repository` block. This block tells Pipelines which branch has the "live" infrastructure you want provisioned. When you merge IaC to this branch, Pipelines will be triggered to update your infrastructure accordingly. + +```hcl title=".gruntwork/repository.hcl" +repository { + deploy_branch_name = "main" +} +``` + +:::note Progress Checklist + + + + +::: + +### The `environment` block + +Next, you'll want to define the environments you want to manage with Pipelines using the [`environment` block](/2.0/reference/pipelines/configurations-as-code/api#environment-block). + +For each environment, you'll want to define a [`filter` block](/2.0/reference/pipelines/configurations-as-code/api#filter-block) that tells Pipelines which units are part of that environment. You'll also want to define an [`authentication` block](/2.0/reference/pipelines/configurations-as-code/api#authentication-block) that tells Pipelines how to authenticate with your cloud provider(s) for that environment. + + + + +```hcl title=".gruntwork/environment-production.hcl" +environment "production" { + filter { + paths = ["prod/*"] + } + + authentication { + aws_oidc { + account_id = "123456789012" + plan_iam_role_arn = "arn:aws:iam::123456789012:role/pipelines-plan" + apply_iam_role_arn = "arn:aws:iam::123456789012:role/pipelines-apply" + } + } +} +``` + +:::tip + +Learn more about how Pipelines authenticates to AWS in the [Authenticating to AWS](/2.0/docs/pipelines/concepts/cloud-auth/aws) page. + +::: + +:::tip + +Check out the [aws block](/2.0/reference/pipelines/configurations-as-code/#aws-blocks) for more information on how to configure Pipelines to reuse common AWS configurations. + +::: + +:::note Progress Checklist + + + + + + + +::: + + + + +```hcl title=".gruntwork/environment-production.hcl" +environment "production" { + filter { + paths = ["prod/*"] + } + + authentication { + azure_oidc { + tenant_id = "00000000-0000-0000-0000-000000000000" + subscription_id = "11111111-1111-1111-1111-111111111111" + + plan_client_id = "33333333-3333-3333-3333-333333333333" + apply_client_id = "44444444-4444-4444-4444-444444444444" + } + } +} +``` + +:::tip + +Learn more about how Pipelines authenticates to Azure in the [Authenticating to Azure](/2.0/docs/pipelines/concepts/cloud-auth/azure) page. + +::: + +:::note Progress Checklist + + + + + + + + +::: + + + + +```hcl title=".gruntwork/environment-production.hcl" +environment "production" { + filter { + paths = ["prod/*"] + } + + authentication { + custom { + auth_provider_cmd = "./scripts/custom-auth-prod.sh" + } + } +} +``` + +:::tip + +Learn more about how Pipelines can authenticate with custom authentication in the [Custom Authentication](/2.0/docs/pipelines/concepts/cloud-auth/custom) page. 
+ +::: + +:::note Progress Checklist + + + + + + + +::: + + + + +## Creating `.github/workflows/pipelines.yml` + +Create a `.github/workflows/pipelines.yml` file in the root of your repository with the following content: + +```yaml title=".github/workflows/pipelines.yml" +name: Pipelines +run-name: "[GWP]: ${{ github.event.commits[0].message || github.event.pull_request.title || 'No commit message' }}" +on: + push: + branches: + - main + paths-ignore: + - ".github/**" + pull_request: + types: + - opened + - synchronize + - reopened + paths-ignore: + - ".github/**" + +# Permissions to assume roles and create pull requests +permissions: + id-token: write + contents: write + pull-requests: write + +jobs: + GruntworkPipelines: + uses: gruntwork-io/pipelines-workflows/.github/workflows/pipelines.yml@v4 +``` + +:::tip + +You can read the [Pipelines GitHub Actions Workflow](https://github.com/gruntwork-io/pipelines-workflows/blob/main/.github/workflows/pipelines.yml) to learn how this GitHub Actions workflow calls the Pipelines CLI to run your pipelines. + +::: + +:::note Progress Checklist + + + + +::: + +## Commit and Push Your Changes + +Commit and push your changes to your repository. + +:::note + +You should include `[skip ci]` in your commit message here to prevent triggering the Pipelines workflow before everything is properly configured. + +::: + +```bash +git add . +git commit -m "Add Pipelines configurations and GitHub Actions workflow [skip ci]" +git push +``` + +:::note Progress Checklist + + + + +::: + +🚀 You've successfully added Gruntwork Pipelines to your existing repository! + +## Next Steps + +You have successfully completed the installation of Gruntwork Pipelines in an existing repository. Proceed to [Deploying your first infrastructure change](/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.md) to begin deploying changes. + +## Troubleshooting Tips + +If you encounter issues during the setup process, here are some common troubleshooting steps: + +### Bootstrap Resources Failure + +If your bootstrap resource provisioning fails: + + + + + + + +### HCL Configuration Issues + +If your HCL configurations aren't working as expected: + + + + + +### GitHub Actions Workflow Issues + +If your GitHub Actions workflow isn't working as expected: + + + + + + + + diff --git a/docs/2.0/docs/pipelines/installation/addinggitlabrepo.md b/docs/2.0/docs/pipelines/installation/addinggitlabrepo.md deleted file mode 100644 index 142a0b577a..0000000000 --- a/docs/2.0/docs/pipelines/installation/addinggitlabrepo.md +++ /dev/null @@ -1,192 +0,0 @@ -import CustomizableValue from '/src/components/CustomizableValue'; - -# Adding Pipelines to an existing GitLab Project - -This guide walks you through the process of adding Gruntwork Pipelines to a GitLab project. By the end, you'll have a fully configured GitLab CI/CD pipeline that can deploy infrastructure changes automatically. 
- -## Prerequisites - -Before you begin, make sure you have: - -- Basic familiarity with Git, GitLab, and infrastructure as code concepts -- Access to one (or many) AWS account(s) where you have permission to create IAM roles and OIDC providers -- Completed the [Pipelines Auth setup for GitLab](/2.0/docs/pipelines/installation/viamachineusers#gitlab) and setup a machine user with appropriate PAT tokens -- Local access to Gruntwork's GitHub repositories, specifically the [architecture catalog](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/) - -:::info - -**For custom GitLab instances only**: You must [fork](https://docs.gitlab.com/user/project/repository/forking_workflow/#create-a-fork) Gruntwork's public [Pipelines workflow project](https://gitlab.com/gruntwork-io/pipelines-workflows) into your own GitLab instance. - -This is necessary because Gruntwork Pipelines uses [GitLab CI/CD components](/2.0/docs/pipelines/architecture/ci-workflows), and GitLab requires components to reside within the [same GitLab instance as the project referencing them](https://docs.gitlab.com/ci/components/#use-a-component). - -When creating the fork, we recommend configuring it as a public mirror of the original Gruntwork project and ensuring that tags are included. -::: - -## Setup Process Overview - -Setting up Gruntwork Pipelines for GitLab involves these main steps: - -(prerequisite) Complete the [Pipelines Auth setup for GitLab](/2.0/docs/pipelines/installation/viamachineusers#gitlab) - -1. [Authorize Your GitLab Group with Gruntwork](#step-1-authorize-your-gitlab-group-with-gruntwork) -2. [Install required tools (mise, boilerplate)](#step-2-install-required-tools) -3. [Install Gruntwork Pipelines in Your Repository](#step-3-install-gruntwork-pipelines-in-your-repository) -4. [Install AWS OIDC Provider and IAM Roles for Pipelines](#step-4-install-aws-oidc-provider-and-iam-roles-for-pipelines) -5. [Complete the setup](#step-5-complete-the-setup) - -## Detailed Setup Instructions - -### Step 0: Ensure OIDC configuration and JWKS are publicly accessible - -This step only applies if you are using a self-hosted GitLab instance that is not accessible from the public internet. If you are using GitLab.com or a self-hosted instance that is publicly accessible, you can skip this step. - -1. [Follow GitLab's instructions](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) for hosting your OIDC configuration and JWKS in a public location (e.g. S3 Bucket). This is necessary for both Gruntwork and the AWS OIDC provider to access the GitLab OIDC configuration and JWKS when authenticating JWT's generated by your custom instance. -2. Note the (stored as `ci_id_tokens_issuer_url` in your `gitlab.rb` file per GitLab's instructions) generated above for reuse in the next steps. - - -### Step 1: Authorize Your GitLab Group with Gruntwork - -To use Gruntwork Pipelines with GitLab, your group needs authorization from Gruntwork: - -1. Email your Gruntwork account manager or support@gruntwork.io with: - - ``` - GitLab group name(s): $$GITLAB_GROUP_NAME$$ (e.g. acme-io) - GitLab Issuer URL: $$ISSUER_URL$$ (For most users this is the URL of your GitLab instance e.g. https://gitlab.acme.io. If your instance is not publicly accessible, this should be a separate URL that is publicly accessible per step 0, e.g. https://s3.amazonaws.com/YOUR_BUCKET_NAME/) - Organization name: $$ORGANIZATION_NAME$$ (e.g. Acme, Inc.) - ``` - -2. 
Wait for confirmation that your group has been authorized. - -### Step 2: Install Required Tools - -First, you'll need to install [mise](https://mise.jdx.dev/), a powerful environment manager that will help set up the required tools: - -1. Install mise by following the [getting started guide](https://mise.jdx.dev/getting-started.html) - -2. Activate mise in your shell: - ```bash - # For Bash - echo 'eval "$(~/.local/bin/mise activate bash)"' >> ~/.bashrc - - # For Zsh - echo 'eval "$(~/.local/bin/mise activate zsh)"' >> ~/.zshrc - - # For Fish - echo 'mise activate fish | source' >> ~/.config/fish/config.fish - ``` - -3. Install the boilerplate tool, which will generate the project structure: - ```bash - # For mise version BEFORE 2025.2.10 - mise plugin add boilerplate https://github.com/gruntwork-io/asdf-boilerplate.git - - # For mise version 2025.2.10+ - mise plugin add boilerplate - - mise use boilerplate@0.6.0 - ``` - -4. Verify the installation: - ```bash - boilerplate --version - - # If that doesn't work, try: - mise x -- boilerplate --version - - # If that still doesn't work, check where boilerplate is installed: - mise which boilerplate - ``` - -### Step 3: Install Gruntwork Pipelines in Your Repository - -1. Identify where you want to install Gruntwork Pipelines, for example create a new project/repository in your GitLab group (or use an existing one) named - -2. Clone the repository to your local machine if it's not already cloned: - ```bash - git clone git@gitlab.com:$$GITLAB_GROUP_NAME$$/$$REPOSITORY_NAME$$.git - cd $$REPOSITORY_NAME$$ - ``` -3. Create a new branch for your changes: - ```bash - git checkout -b gruntwork-pipelines - ``` - -4. Download the sample [vars.yaml file](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/blob/main/examples/gitlab-pipelines/vars.yaml) to the root of - -4. Edit the `vars.yaml` file to customize it for your environment. If using a custom GitLab instance, update any custom instance variables. - -5. `cd` to the root of where you wish to install Gruntwork Pipelines. Run the boilerplate tool to generate your repository structure: - ```bash - boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/gitlab-pipelines-infrastructure-live-root/?ref=v3.1.0" --output-folder . --var-file vars.yaml --non-interactive - ``` - - If you encounter SSH issues, verify your SSH access to GitHub: - ```bash - ssh -T git@github.com - # or try cloning manually - git clone git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git - ``` - -6. Commit the changes: - ```bash - git add . - git commit -m "[skip ci] Add Gruntwork Pipelines" - git push origin gruntwork-pipelines - ``` - -7. Create a merge request in GitLab and review the changes. - -### Step 4: Install AWS OIDC Provider and IAM Roles for Pipelines - -1. Navigate to the `_global` folder under each account in your repository and review the Terragrunt files that were created: - - The GitLab OIDC identity provider in AWS. - - :::note - If using a custom GitLab instance, ensure the `URL` and `audiences` inputs in this configuration are correct. - ::: - - - IAM roles for your the account (`root-pipelines-plan` and `root-pipelines-apply`) - -2. 
Apply these configurations to create the required AWS resources: - ```bash - cd $$ACCOUNT_NAME$$/_global/ - terragrunt run-all plan - terragrunt run-all apply - ``` - - :::note - - In the event you already have an OIDC provider for your SCM in the AWS account you can import the existing one: - - ``` - cd _global/$$ACCOUNT_NAME$$/gitlab-pipelines-openid-connect-provider/ - terragrunt import "aws_iam_openid_connect_provider.gitlab" "ARN_OF_EXISTING_OIDC_PROVIDER" - ``` - - - ::: - -### Step 5: Complete the Setup - -1. Return to GitLab and merge the merge request with your changes. -2. Ensure that `PIPELINES_GITLAB_TOKEN` and `PIPELINES_GITLAB_READ_TOKEN` are set as a CI/CD variables in your group or project if you haven't already (see the [Machine Users setup guide](/2.0/docs/pipelines/installation/viamachineusers#gitlab) for details). -3. Test your setup by creating a new branch with some sample infrastructure code and creating a merge request. - -## Next Steps - -After setting up Pipelines, you can: - -- [Deploy your first infrastructure change](/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change) -- [Learn how to run plan and apply operations](/2.0/docs/pipelines/guides/running-plan-apply) -- [Extend Pipelines with custom actions](/2.0/docs/pipelines/guides/extending-pipelines) - -## Troubleshooting - -If you encounter issues during setup: - -- Ensure your GitLab CI user has the correct permissions to your group and projects -- Verify that both `PIPELINES_GITLAB_TOKEN` and `PIPELINES_GITLAB_READ_TOKEN` are set correctly as CI/CD variables and are *NOT* marked as protected -- Confirm your GitLab group has been authorized by Gruntwork for Pipelines usage - -For further assistance, contact [support@gruntwork.io](mailto:support@gruntwork.io). diff --git a/docs/2.0/docs/pipelines/installation/addinggitlabrepo.mdx b/docs/2.0/docs/pipelines/installation/addinggitlabrepo.mdx new file mode 100644 index 0000000000..afe386192a --- /dev/null +++ b/docs/2.0/docs/pipelines/installation/addinggitlabrepo.mdx @@ -0,0 +1,392 @@ +# Bootstrap Pipelines in a New GitLab Project + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import PersistentCheckbox from '/src/components/PersistentCheckbox'; +import CustomizableValue from '/src/components/CustomizableValue'; + +To configure Gruntwork Pipelines in a new GitLab project, complete the following steps (which are explained in detail below): + +1. (If using a self-hosted GitLab instance) Ensure OIDC configuration and JWKS are publicly accessible. +2. Create an `infrastructure-live` project. +3. Configure machine user tokens for GitLab access, or ensure that the appropriate machine user tokens are set up as project or organization secrets. +4. Create `.gruntwork` HCL configurations to tell Pipelines how to authenticate in your environments. +5. Create `.gitlab-ci.yml` to tell your GitLab CI/CD pipeline how to run your pipelines. +6. Commit and push your changes to your project. + +## Ensure OIDC configuration and JWKS are publicly accessible + +This step only applies if you are using a self-hosted GitLab instance that is not accessible from the public internet. If you are using GitLab.com or a self-hosted instance that is publicly accessible, you can skip this step. + +1. [Follow GitLab's instructions](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) for hosting your OIDC configuration and JWKS in a public location (e.g. S3 Bucket). 
This is necessary for both Gruntwork and the AWS OIDC provider to access the GitLab OIDC configuration and JWKS when authenticating JWT's generated by your custom instance. +2. Note the (stored as `ci_id_tokens_issuer_url` in your `gitlab.rb` file per GitLab's instructions) generated above for reuse in the next steps. + +:::note Progress Checklist + + + +::: + +## Creating the infrastructure-live project + +Creating an `infrastructure-live` project is fairly straightforward. First, create a new project using the official GitLab documentation for [creating repositories](https://docs.gitlab.com/user/project/repository/). Name the project something like `infrastructure-live` and make it private (or internal). + +## Configuring SCM Access + +Pipelines needs the ability to interact with Source Control Management (SCM) platforms to fetch resources (e.g. IaC code, reusable CI/CD code and the Pipelines binary itself). + +For GitLab, you'll need to configure SCM access using [machine users](/2.0/docs/pipelines/installation/viamachineusers) with appropriate Personal Access Tokens (PATs). + +:::note Progress Checklist + + + +::: + +## Creating Cloud Resources for Pipelines + +To start using Pipelines, you'll need to ensure that requisite cloud resources are provisioned in your cloud provider(s) to start managing your infrastructure with Pipelines. + +:::note + +If you are using the [Gruntwork Account Factory](/2.0/docs/accountfactory/architecture), this will be done automatically during onboarding and in the process of [vending every new AWS account](/2.0/docs/accountfactory/guides/vend-aws-account), so you don't need to worry about this. + +::: + +Clone your `infrastructure-live` project repository to your local machine using [Git](https://docs.gitlab.com/user/project/repository/index.html#clone-a-repository). + +:::tip + +If you don't have Git installed, you can install it by following the official guide for [Git installation](https://git-scm.com/downloads). + +::: + +For example: + +```bash +git clone git@gitlab.com:acme/infrastructure-live.git +cd infrastructure-live +``` + +:::note Progress Checklist + + + + +::: + +To bootstrap your `infrastructure-live` repository, we'll use Boilerplate to scaffold it with the necessary IaC code to provision the infrastructure necessary for Pipelines to function. + +The easiest way to install Boilerplate is to use `mise` to install it. + +:::tip + +If you don't have `mise` installed, you can install it by following the official guide for [mise installation](https://mise.jdx.dev/getting-started.html). + +::: + +```bash +mise use -g boilerplate@latest +``` + +:::tip + +If you'd rather install a specific version of Boilerplate, you can use the `ls-remote` command to list the available versions. + +```bash +mise ls-remote boilerplate +``` + +::: + +:::note Progress Checklist + + + +::: + +### Cloud-specific bootstrap instructions + +The resources that you need provisioned in AWS to start managing resources with Pipelines are: + +1. An OpenID Connect (OIDC) provider +2. An IAM role for Pipelines to assume when running Terragrunt plan commands +3. An IAM role for Pipelines to assume when running Terragrunt apply commands + +For every account you want Pipelines to manage infrastructure in. + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your `infrastructure-live` repository is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your `infrastructure-live` repository. 
+ +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Set up the Terragrunt configurations in your `infrastructure-live` repository for bootstrapping Pipelines in a single AWS account +2. Use Terragrunt to provision these resources in your AWS account +3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

Bootstrap your `infrastructure-live` repository

+ +To bootstrap your `infrastructure-live` repository, we'll use Boilerplate to scaffold it with the necessary content for Pipelines to function. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/infrastructure-live?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +You can just reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/infrastructure-live?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=dev' \ + --var 'GitLabGroupName=acme' \ + --var 'GitLabRepoName=infrastructure-live' \ + --var 'GitLabInstanceURL=https://gitlab.com' \ + --var 'AWSAccountID=123456789012' \ + --var 'AWSRegion=us-east-1' \ + --var 'StateBucketName=my-state-bucket' \ + --non-interactive +``` + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + +```yaml title="vars.yml" +AccountName: dev +GitLabGroupName: acme +GitLabRepoName: infrastructure-live +GitLabInstanceURL: https://gitlab.com +AWSAccountID: 123456789012 +AWSRegion: us-east-1 +StateBucketName: my-state-bucket +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/infrastructure-live?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +If you're using a self-hosted GitLab instance, you'll want to make sure the issuer is set correctly when calling Boilerplate. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/infrastructure-live?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=dev' \ + --var 'GitLabGroupName=acme' \ + --var 'GitLabRepoName=infrastructure-live' \ + --var 'GitLabInstanceURL=https://gitlab.com' \ + --var 'AWSAccountID=123456789012' \ + --var 'AWSRegion=us-east-1' \ + --var 'StateBucketName=my-state-bucket' \ + --var 'Issuer=$$ISSUER_URL$$' \ + --non-interactive +``` + +::: + +:::note Progress Checklist + + + +::: + +Next, install Terragrunt and OpenTofu locally (the `.mise.toml` file in the root of the repository after scaffolding should already be set to the versions you want for Terragrunt and OpenTofu): + +```bash +mise install +``` + +:::note Progress Checklist + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

Provision the resources

+ +Once you've set up the Terragrunt configurations, you can use Terragrunt to provision the resources in your AWS account. + +:::tip + +Make sure that you're authenticated with AWS locally before proceeding. + +You can follow the documentation [here](https://search.opentofu.org/provider/hashicorp/aws/latest#authentication-and-configuration) to authenticate with the AWS provider. You are advised to choose an authentication method that doesn't require any hard-coded credentials, like assuming an IAM role. + +::: + +First, make sure that everything is set up correctly by running a plan in the `bootstrap` directory in `name-of-account/_global` where `name-of-account` is the name of the first AWS account you want to bootstrap. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the AWS provider on every run by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + +::: + +:::note Progress Checklist + + + +::: + +Next, apply the changes to your account. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache apply +``` + +:::note Progress Checklist + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
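+
+What gets provisioned here is a GitLab OIDC provider plus plan and apply roles that trust tokens from your project; the GitLab-specific part is the shape of the token claims the trust policy checks. The sketch below is illustrative only — the real resources come from the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) modules, and the account ID, group, project, and audience values are assumptions (a self-hosted instance would use its own URL):
+
+```hcl
+# Illustrative only: the kind of trust policy the GitLab bootstrap roles rely on.
+data "aws_iam_policy_document" "gitlab_pipelines_trust" {
+  statement {
+    actions = ["sts:AssumeRoleWithWebIdentity"]
+
+    principals {
+      type        = "Federated"
+      # ARN of the GitLab OIDC provider created by the bootstrap unit
+      identifiers = ["arn:aws:iam::123456789012:oidc-provider/gitlab.com"]
+    }
+
+    condition {
+      test     = "StringEquals"
+      variable = "gitlab.com:aud"
+      values   = ["https://gitlab.com"]
+    }
+
+    condition {
+      test     = "StringLike"
+      variable = "gitlab.com:sub"
+      values   = ["project_path:acme/infrastructure-live:ref_type:branch:ref:*"]
+    }
+  }
+}
+```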

Optional: Bootstrap additional AWS accounts

+ +If you have multiple AWS accounts, and you want to bootstrap them as well, you can do so by following a similar, but slightly condensed process. + +For each additional account you want to bootstrap, you'll use Boilerplate in the root of your `infrastructure-live` repository to scaffold out the necessary content for just that account. + +:::tip + +If you are going to bootstrap more AWS accounts, you'll probably want to commit your existing changes before proceeding. + +```bash +git add . +git commit -m "Add core Pipelines scaffolding [skip ci]" +``` + +The `[skip ci]` in the commit message is just in-case you push your changes up to your repository at this state, as you don't want to trigger Pipelines yet. + +::: + +Just like before, you'll use Boilerplate to scaffold out the necessary content for just that account. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/infrastructure-live?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +Again, you can just reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/account?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=prod' \ + --var 'AWSAccountID=987654321012' \ + --var 'AWSRegion=us-east-1' \ + --var 'StateBucketName=my-prod-state-bucket' \ + --var 'GitLabGroupName=acme' \ + --var 'GitLabRepoName=infrastructure-live' \ + --var 'GitLabInstanceURL=https://gitlab.com' \ + --non-interactive +``` + +If you prefer to store the values in a YAML file and pass it to Boilerplate using the `--var-file` flag, you can do so like this: + +```yaml title="vars.yml" +AccountName: prod +AWSAccountID: 987654321012 +AWSRegion: us-east-1 +StateBucketName: my-prod-state-bucket +GitLabGroupName: acme +GitLabRepoName: infrastructure-live +GitLabInstanceURL: https://gitlab.com +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/account?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +::: + +:::note Progress Checklist + + + +::: + +Once you've scaffolded out the additional accounts you want to bootstrap, you can use Terragrunt to provision the resources in each of these accounts. + +:::tip + +Make sure that you authenticate to each AWS account you are bootstrapping using AWS credentials for that account before you attempt to provision resources in it. + +::: + +For each account you want to bootstrap, you'll need to run the following commands: + +```bash +cd /_global/bootstrap +terragrunt run --all --non-interactive --provider-cache plan +terragrunt run --all --non-interactive --provider-cache apply +``` + +:::note Progress Checklist + + + + +::: + +## Commit and push your changes + +Commit and push your changes to your repository. + + :::note + +You should include `[skip ci]` in your commit message here to prevent triggering the Pipelines workflow. + +::: + +```bash +git add . +git commit -m "Add Pipelines GitLab CI workflow [skip ci]" +git push +``` + +:::note Progress Checklist + + + + +::: + +🚀 You've successfully added Gruntwork Pipelines to your new repository! 
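+
+If the Boilerplate scaffold didn't already generate the `.gruntwork` HCL configuration called out in step 4 at the top of this guide, you'll also want a `repository` block naming your deploy branch plus one `environment` block per account so the pipeline knows how to authenticate. A minimal sketch of an environment block is shown below — the environment name, paths, account ID, and role names are assumptions:
+
+```hcl title=".gruntwork/environment-dev.hcl"
+environment "dev" {
+  filter {
+    paths = ["dev/*"]
+  }
+
+  authentication {
+    aws_oidc {
+      account_id         = "123456789012"
+      plan_iam_role_arn  = "arn:aws:iam::123456789012:role/root-pipelines-plan"
+      apply_iam_role_arn = "arn:aws:iam::123456789012:role/root-pipelines-apply"
+    }
+  }
+}
+```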
+ +## Next steps + +You have successfully completed the installation of Gruntwork Pipelines in a new repository. Proceed to [Deploying your first infrastructure change](/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.md) to begin deploying changes. diff --git a/docs/2.0/docs/pipelines/installation/addingnewgitlabrepo.md b/docs/2.0/docs/pipelines/installation/addingnewgitlabrepo.md deleted file mode 100644 index 024a2c5183..0000000000 --- a/docs/2.0/docs/pipelines/installation/addingnewgitlabrepo.md +++ /dev/null @@ -1,547 +0,0 @@ -import CustomizableValue from '/src/components/CustomizableValue'; - -# Creating a New GitLab Project with Pipelines - -This guide walks you through the process of setting up a new GitLab Project with the Gruntwork Platform. By the end, you'll have a fully configured GitLab CI/CD pipeline that can create new AWS accounts and deploy infrastructure changes automatically. - -:::info -To use Gruntwork Pipelines in an **existing** GitLab repository, see this [guide](/2.0/docs/pipelines/installation/addinggitlabrepo). -::: - -## Prerequisites - -Before you begin, make sure you have: - -- Basic familiarity with Git, GitLab, and infrastructure as code concepts -- Completed the [AWS Landing Zone setup](/2.0/docs/pipelines/installation/prerequisites/awslandingzone) -- Have programmatic access to the AWS accounts created in the [AWS Landing Zone setup](/2.0/docs/pipelines/installation/prerequisites/awslandingzone) -- Completed the [Pipelines Auth setup for GitLab](/2.0/docs/pipelines/installation/viamachineusers#gitlab) and setup a machine user with appropriate PAT tokens -- Local access to Gruntwork's GitHub repositories, specifically the [architecture catalog](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/) - -
-Additional setup for **custom GitLab instances only** - -### Fork the Pipelines workflow project - -You must [fork](https://docs.gitlab.com/user/project/repository/forking_workflow/#create-a-fork) Gruntwork's public [Pipelines workflow project](https://gitlab.com/gruntwork-io/pipelines-workflows) into your own GitLab instance. -This is necessary because Gruntwork Pipelines uses [GitLab CI/CD components](/2.0/docs/pipelines/architecture/ci-workflows), and GitLab requires components to reside within the [same GitLab instance as the project referencing them](https://docs.gitlab.com/ci/components/#use-a-component). - -When creating the fork, we recommend configuring it as a public mirror of the original Gruntwork project and ensuring that tags are included. - -### Ensure OIDC configuration and JWKS are publicly accessible - -This step only applies if you are using a self-hosted GitLab instance that is not accessible from the public internet. If you are using GitLab.com or a self-hosted instance that is publicly accessible, you can skip this step. - -1. [Follow GitLab's instructions](https://docs.gitlab.com/ci/cloud_services/aws/#configure-a-non-public-gitlab-instance) for hosting your OIDC configuration and JWKS in a public location (e.g. S3 Bucket). This is necessary for both Gruntwork and the AWS OIDC provider to access the GitLab OIDC configuration and JWKS when authenticating JWT's generated by your custom instance. -2. Note the (stored as `ci_id_tokens_issuer_url` in your `gitlab.rb` file per GitLab's instructions) generated above for reuse in the next steps. -
- -1. Create a new GitLab project for your `infrastructure-live-root` repository. -1. Install dependencies. -1. Configure the variables required to run the infrastructure-live-root boilerplate template. -1. Create your `infrastructure-live-root` repository contents using Gruntwork's architecture-catalog template. -1. Apply the account baselines to your AWS accounts. - - -## Create a new infrastructure-live-root - -### Authorize Your GitLab Group with Gruntwork - -To use Gruntwork Pipelines with GitLab, your group needs authorization from Gruntwork. Email your Gruntwork account manager or support@gruntwork.io with: - - ``` - GitLab group name(s): $$GITLAB_GROUP_NAME$$ (e.g. acme-io) - GitLab Issuer URL: $$ISSUER_URL$$ (For most users this is the URL of your GitLab instance e.g. https://gitlab.acme.io, if your instance is not publicly accessible, this should be a separate URL that is publicly accessible per step 0, e.g. https://s3.amazonaws.com/YOUR_BUCKET_NAME/) - Organization name: $$ORGANIZATION_NAME$$ (e.g. Acme, Inc.) - ``` - -Continue with the rest of the guide while you await confirmation when your group has been authorized. - -### Create a new GitLab project - -1. Navigate to the group. -1. Click the **New Project** button. -1. Enter a name for the project. e.g. infrastructure-live-root -1. Click **Create Project**. -1. Clone the project to your local machine. -1. Navigate to the project directory. -1. Create a new branch `bootstrap-repository`. - -### Install dependencies - -1. Install [mise](https://mise.jdx.dev/getting-started.html) on your machine. -1. Activate mise in your shell: - - ```bash - # For Bash - echo 'eval "$(~/.local/bin/mise activate bash)"' >> ~/.bashrc - - # For Zsh - echo 'eval "$(~/.local/bin/mise activate zsh)"' >> ~/.zshrc - - # For Fish - echo 'mise activate fish | source' >> ~/.config/fish/config.fish - ``` - -1. Add the following to a .mise.toml file in the root of your project: - - ```toml title=".mise.toml" - [tools] - boilerplate = "0.8.1" - opentofu = "1.10.0" - terragrunt = "0.81.6" - awscli = "latest" - ``` - -1. Run `mise install`. - - -### Bootstrap the repository - -Gruntwork provides a boilerplate [template](https://github.com/gruntwork-io/terraform-aws-architecture-catalog/tree/main/templates/devops-foundations-infrastructure-live-root) that incorporates best practices while allowing for customization. The template is designed to scaffold a best-practices Terragrunt configurations. It includes patterns for module defaults, global variables, and account baselines. Additionally, it integrates Gruntwork Pipelines. - -#### Configure the variables required to run the boilerplate template - -Copy the content below to a `vars.yaml` file in the root of your project and update the `` values with your own. - -```yaml title="vars.yaml" -SCMProvider: GitLab - -# The GitLab group to use for the infrastructure repositories. This should include any additional sub-groups in the name -# Example: acme/prod -SCMProviderGroup: $$GITLAB_GROUP_NAME$$ - -# The GitLab project to use for the infrastructure-live repository. -SCMProviderRepo: infrastructure-live-root - -# The base URL of your GitLab group repos. E.g., gitlab.com/ -RepoBaseUrl: $$GITLAB_GROUP_REPO_BASE_URL$$ - -# The name of the branch to deploy to. 
-# Example: main -DeployBranchName: $$DEPLOY_BRANCH_NAME$$ - -# The AWS account ID for the management account -# Example: "123456789012" -AwsManagementAccountId: $$AWS_MANAGEMENT_ACCOUNT_ID$$ - -# The AWS account ID for the security account -# Example: "123456789013" -AwsSecurityAccountId: $$AWS_SECURITY_ACCOUNT_ID$$ - -# The AWS account ID for the logs account -# Example: "123456789014" -AwsLogsAccountId: $$AWS_LOGS_ACCOUNT_ID$$ - -# The AWS account ID for the shared account -# Example: "123456789015" -AwsSharedAccountId: $$AWS_SHARED_ACCOUNT_ID$$ - -# The AWS account Email for the logs account -# Example: logs@acme.com -AwsLogsAccountEmail: $$AWS_LOGS_ACCOUNT_EMAIL$$ - -# The AWS account Email for the management account -# Example: management@acme.com -AwsManagementAccountEmail: $$AWS_MANAGEMENT_ACCOUNT_EMAIL$$ - -# The AWS account Email for the security account -# Example: security@acme.com -AwsSecurityAccountEmail: $$AWS_SECURITY_ACCOUNT_EMAIL$$ - -# The AWS account Email for the shared account -# Example: shared@acme.com -AwsSharedAccountEmail: $$AWS_SHARED_ACCOUNT_EMAIL$$ - -# The name prefix to use for creating resources e.g S3 bucket for OpenTofu state files -# Example: acme -OrgNamePrefix: $$ORG_NAME_PREFIX$$ - -# The default region for AWS Resources -# Example: us-east-1 -DefaultRegion: $$DEFAULT_REGION$$ - -################################################################################ -# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED. -################################################################################ - -# List of the git repositories to populate for the catalog -# CatalogRepositories: -# - github.com/gruntwork-io/terraform-aws-service-catalog - -# The AWS partition to use. Options: aws, aws-us-gov -# AWSPartition: aws - -# The name of the IAM role to use for the plan job. -# PlanIAMRoleName: root-pipelines-plan - -# The name of the IAM role to use for the apply job. -# ApplyIAMRoleName: root-pipelines-apply - -# The default tags to apply to all resources. -# DefaultTags: -# "{{ .OrgNamePrefix }}:Team": "DevOps" - -# The version for terraform-aws-security module to use for OIDC provider and roles provisioning -# SecurityModulesVersion: v0.75.18 - -# The URL of the custom SCM provider instance. Set this if you are using a custom instance of GitLab. -# CustomSCMProviderInstanceURL: https://gitlab.example.io - -# The relative path from the host server to the custom pipelines workflow repository. Set this if you are using a custom/forked instance of the pipelines workflow. -# CustomWorkflowHostRelativePath: pipelines-workflows -``` - -#### Generate the repository contents - -1. Run the following command, from the root of your project, to generate the `infrastructure-live-root` repository contents: - - - ```bash - boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-live-root/?ref=main" --output-folder . --var-file vars.yaml --non-interactive - ``` - - This command adds all code required to set up your `infrastructure-live-root` repository. -1. Remove the boilerplate dependency from the `mise.toml` file. It is no longer needed. - -1. Commit your local changes and push them to the `bootstrap-repository` branch. - - ```bash - git add . 
- git commit -m "Bootstrap infrastructure-live-root repository initial commit [skip ci]" - git push origin bootstrap-repository - ``` - - Skipping the CI/CD process for now; you will manually apply the infrastructure baselines to your AWS accounts in a later step. - -1. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand what will be applied to your AWS accounts. The generated files fall under the following categories: - - - GitLab Pipelines workflow file - - Gruntwork Pipelines configuration files - - Module defaults files for infrastructure code - - Account baselines and GitLab OIDC module scaffolding files for your core AWS accounts: management, security, logs and shared. - -### Apply the account baselines to your AWS accounts - -You will manually `terragrunt apply` the generated infrastructure baselines to get your accounts bootstrapped **before** merging this content into your main branch. - -:::tip -You can utilize the AWS SSO Portal to obtain temporary AWS credentials necessary for subsequent steps: - -1. Sign in to the Portal page and select your preferred account to unveil the roles accessible to your SSO user. -1. Navigate to the "Access keys" tab adjacent to the "AWSAdministratorAccess" role. -1. Copy the "AWS environment variables" provided and paste them into your terminal for usage. -::: - - -1. [ ] Apply infrastructure changes in the **management** account - - 1. - [ ] Obtain AWS CLI Administrator credentials for the management account - - 1. - [ ] Navigate to the management account folder - - ```bash - cd management/ - ``` - - 1. - [ ] Using your credentials, run `terragrunt plan`. - - ```bash - terragrunt run --all plan --terragrunt-non-interactive - ``` - - 1. - [ ] After the plan succeeds, apply the changes: - - ```bash - terragrunt run --all apply --terragrunt-non-interactive - ``` - - 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. The lock files will be committed in the final step of the setup. e.g. - - ```bash - terragrunt run --all providers -- lock -platform=darwin_amd64 -platform=linux_amd64 - ``` - - 1. - [ ] Update Permissions for Account Factory Portfolio - - The account factory pipeline _will fail_ until you grant the pipelines roles (`root-pipelines-plan` and `root-pipelines-apply`) access to the portfolio. This step **must be done after** you provision the pipelines roles in the management account (where control tower is set up). - - Access to the portfolio is separate from IAM access, it **must** be granted in the Service Catalog console. - - #### **Steps to grant access** - - To grant access to the Account Factory Portfolio, you **must** be an individual with Service Catalog administrative permissions. - - 1. Log into the management AWS account - 1. Go into the Service Catalog console - 1. Ensure you are in your default region(control-tower region) - 1. Select the **Portfolios** option in **Administration** from the left side navigation panel - 1. Click on the portfolio named **AWS Control Tower Account Factory Portfolio** - 1. Select the **Access** tab - 1. Click the **Grant access** button - 1. In the **Access type** section, leave the default value of **IAM Principal** - 1. Select the **Roles** tab in the lower section - 1. Enter `root-pipelines` into the search bar, there should be two results (`root-pipelines-plan` and `root-pipelines-apply`). Click the checkbox to the left of each role name. - 1. 
Click the **Grant access** button in the lower right hand corner - - 1. - [ ] Increase Account Quota Limit (OPTIONAL) - - Note that DevOps Foundations makes it very convenient, and therefore likely, that you will encounter one of the soft limits imposed by AWS on the number of accounts you can create. - - You may need to request a limit increase for the number of accounts you can create in the management account, as the default is currently 10 accounts. - - To request an increase to this limit, search for "Organizations" in the AWS management console [here](https://console.aws.amazon.com/servicequotas/home/dashboard) and request a limit increase to a value that makes sense for your organization. - -1. - [ ] Apply infrastructure changes in the **logs** account - - 1. - [ ] Obtain AWS CLI Administrator credentials for the logs account - 1. - [ ] Navigate to the logs account folder - - ```bash - cd ../logs/ - ``` - - 1. - [ ] Using your credentials, run `terragrunt plan`. - - ```bash - terragrunt run --all plan --terragrunt-non-interactive - ``` - - 1. - [ ] After the plan succeeds, apply the changes: - - ```bash - terragrunt run --all apply --terragrunt-non-interactive - ``` - - 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. e.g. - - ```bash - terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64 - ``` - -1. - [ ] Apply infrastructure changes in the **security** account - - 1. - [ ] Obtain AWS CLI Administrator credentials for the security account - 1. - [ ] Navigate to the security account folder - - ```bash - cd ../security/ - ``` - - 1. - [ ] Using your credentials, run `terragrunt plan`. - - ```bash - terragrunt run --all plan --terragrunt-non-interactive - ``` - - 1. - [ ] After the plan succeeds, apply the changes: - - ```bash - terragrunt run --all apply --terragrunt-non-interactive - ``` - - 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. e.g. - - ```bash - terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64 - ``` - -1. - [ ] Apply infrastructure changes in the **shared** account - - 1. - [ ] Obtain AWS CLI Administrator credentials for the shared account. You may need to grant your user access to the `AWSAdministratorAccess` permission set in the shared account from the management account's Identity Center Admin console. - 1. - [ ] Using your credentials, create a service role - - ```bash - aws iam create-service-linked-role --aws-service-name autoscaling.amazonaws.com - ``` - - 1. - [ ] Navigate to the shared account folder - - ```bash - cd ../shared/ - ``` - - 1. - [ ] Using your credentials, run `terragrunt plan`. - - ```bash - terragrunt run --all plan --terragrunt-non-interactive - ``` - - 1. - [ ] After the plan succeeds, apply the changes: - - ```bash - terragrunt run --all apply --terragrunt-non-interactive - ``` - - 1. - [ ] After applying the changes, make sure to lock providers in your `.terraform.lock.hcl` files. e.g. - - ```bash - terragrunt run --all providers lock -platform=darwin_amd64 -platform=linux_amd64 - ``` - -1. - [ ] Commit your local changes and push them to the `bootstrap-repository` branch. - - ```bash - cd .. - git add . - git commit -m "Bootstrap infrastructure-live-root repository final commit [skip ci]" - git push origin bootstrap-repository - ``` - -1. - [ ] Merge the open merge request. 
**Ensure [skip ci] is present in the commit message.** - - -## Create a new infrastructure-live-access-control (optional) - -### Create a new GitLab project - -1. Navigate to the group. -1. Click the **New Project** button. -1. Enter the name for the project as `infrastructure-live-access-control`. -1. Click **Create Project**. -1. Clone the project to your local machine. -1. Navigate to the project directory. -1. Create a new branch `bootstrap-repository`. - -### Install dependencies - -Run `mise install boilerplate@0.8.1` to install the boilerplate tool. - -### Bootstrap the repository - -#### Configure the variables required to run the boilerplate template - -Copy the content below to a `vars.yaml` file in the root of your project and update the customizable values as needed. - -```yaml title="vars.yaml" -SCMProvider: GitLab - -# The GitLab group to use for the infrastructure repositories. This should include any additional sub-groups in the name -# Example: acme/prod -SCMProviderGroup: $$GITLAB_GROUP_NAME$$ - -# The GitLab project to use for the infrastructure-live repository. -SCMProviderRepo: infrastructure-live-access-control - -# The name of the branch to deploy to. -# Example: main -DeployBranchName: $$DEPLOY_BRANCH_NAME$$ - -# The name prefix to use for creating resources e.g S3 bucket for OpenTofu state files -# Example: acme -OrgNamePrefix: $$ORG_NAME_PREFIX$$ - -# The default region for AWS Resources -# Example: us-east-1 -DefaultRegion: $$DEFAULT_REGION$$ - -################################################################################ -# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED. -################################################################################ - -# The AWS partition to use. -# AWSPartition: aws -``` - -#### Generate the repository contents - -1. Run the following command, from the root of your project, to generate the `infrastructure-live-access-control` repository contents: - - - ```bash - boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-live-access-control/?ref=main" --output-folder . --var-file vars.yaml --non-interactive - ``` - - This command adds all code required to set up your `infrastructure-live-access-control` repository. The generated files fall under the following categories: - - - GitLab Pipelines workflow file - - Gruntwork Pipelines configuration files - - Module defaults files for GitLab OIDC roles and policies - - -2. Commit your local changes and push them to the `bootstrap-repository` branch. - - ```bash - git add . - git commit -m "Bootstrap infrastructure-live-access-control repository [skip ci]" - git push origin bootstrap-repository - ``` - - Skipping the CI/CD process now because there is no infrastructure to apply; repository simply contains the GitLab OIDC role module defaults to enable GitLab OIDC authentication from repositories other than `infrastructure-live-root`. - -3. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand the GitLab OIDC role module defaults. -4. Merge the open merge request. **Ensure [skip ci] is present in the commit message.** - -## Create a new infrastructure-catalog (optional) - -The `infrastructure-catalog` repository is a collection of modules that can be used to build your infrastructure. It is a great way to share modules with your team and across your organization. 
Learn more about the [Developer Self-Service](/2.0/docs/overview/concepts/developer-self-service) concept. - -### Create a new GitLab project - -1. Navigate to the group. -1. Click the **New Project** button. -1. Enter the name for the project as `infrastructure-catalog`. -1. Click **Create Project**. -1. Clone the project to your local machine. -1. Navigate to the project directory. -1. Create a new branch `bootstrap-repository`. - -### Install dependencies - -Run `mise install boilerplate@0.8.1` to install the boilerplate tool. - -### Bootstrap the repository - -#### Configure the variables required to run the boilerplate template - -Copy the content below to a `vars.yaml` file in the root of your project and update the customizable values as needed. - -```yaml title="vars.yaml" -# The name of the repository to use for the catalog. -InfraModulesRepoName: infrastructure-catalog - -# The version of the Gruntwork Service Catalog to use. https://github.com/gruntwork-io/terraform-aws-service-catalog -ServiceCatalogVersion: v0.111.2 - -# The version of the Gruntwork VPC module to use. https://github.com/gruntwork-io/terraform-aws-vpc -VpcVersion: v0.26.22 - -# The default region for AWS Resources -# Example: us-east-1 -DefaultRegion: $$DEFAULT_REGION$$ - -################################################################################ -# OPTIONAL VARIABLES WITH THEIR DEFAULT VALUES. UNCOMMENT AND MODIFY IF NEEDED. -################################################################################ - -# The base URL of the Organization to use for the catalog. -# If you are using Gruntwork's RepoCopier tool, this should be the base URL of the repository you are copying from. -# RepoBaseUrl: github.com/gruntwork-io - -# The name prefix to use for the Gruntwork RepoCopier copied repositories. -# Example: gruntwork-io- -# GWCopiedReposNamePrefix: -``` - - -#### Generate the repository contents - -1. Run the following command, from the root of your project, to generate the `infrastructure-catalog` repository contents: - - - ```bash - boilerplate --template-url "git@github.com:gruntwork-io/terraform-aws-architecture-catalog.git//templates/devops-foundations-infrastructure-modules/?ref=main" --output-folder . --var-file vars.yaml --non-interactive - ``` - - This command adds some code required to set up your `infrastructure-catalog` repository. The generated files are some usable modules for your infrastructure. - -1. Commit your local changes and push them to the `bootstrap-repository` branch. - - ```bash - git add . - git commit -m "Bootstrap infrastructure-catalog repository" - git push origin bootstrap-repository - ``` - -1. Create a new merge request for the `bootstrap-repository` branch. Review the changes to understand the example Service Catalog modules. -1. Merge the open merge request. diff --git a/docs/2.0/docs/pipelines/installation/addingnewrepo.mdx b/docs/2.0/docs/pipelines/installation/addingnewrepo.mdx new file mode 100644 index 0000000000..aff5927f5d --- /dev/null +++ b/docs/2.0/docs/pipelines/installation/addingnewrepo.mdx @@ -0,0 +1,969 @@ +# Bootstrap Pipelines in a New GitHub Repository + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import PersistentCheckbox from '/src/components/PersistentCheckbox'; + +To configure Gruntwork Pipelines in a new GitHub repository, complete the following steps (which are explained in detail below): + +1. Create an `infrastructure-live` repository. +2. 
Configure the Gruntwork.io GitHub App to authorize your `infrastructure-live` repository, or ensure that the appropriate machine user tokens are set up as repository or organization secrets. +3. Create `.gruntwork` HCL configurations to tell Pipelines how to authenticate in your environments. +4. Create `.github/workflows/pipelines.yml` to tell your GitHub Actions workflow how to run your pipelines. +5. Commit and push your changes to your repository. + +## Creating the infrastructure-live repository + +Creating an `infrastructure-live` repository is fairly straightforward. First, create a new repository using the official GitHub documentation for [creating repositories](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository). Name the repository something like `infrastructure-live` and make it private (or internal). + +## Configuring SCM Access + +Pipelines needs the ability to interact with Source Control Management (SCM) platforms to fetch resources (e.g. IaC code, reusable CI/CD code and the Pipelines binary itself). + +There are two ways to configure SCM access for Pipelines: + +1. Using the [Gruntwork.io GitHub App](/2.0/docs/pipelines/installation/viagithubapp#configuration) (recommended for most GitHub users). +2. Using a [machine user](/2.0/docs/pipelines/installation/viamachineusers) (recommended for GitHub users who cannot use the GitHub App). + +:::note Progress Checklist + + + +::: + +## Creating Cloud Resources for Pipelines + +To start using Pipelines, you'll need to ensure that requisite cloud resources are provisioned in your cloud provider(s) to start managing your infrastructure with Pipelines. + +:::note + +If you are using the [Gruntwork Account Factory](/2.0/docs/accountfactory/architecture), this will be done automatically during onboarding and in the process of [vending every new AWS account](/2.0/docs/accountfactory/guides/vend-aws-account), so you don't need to worry about this. + +::: + +Clone your `infrastructure-live` repository to your local machine using [Git](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository). + +:::tip + +If you don't have Git installed, you can install it by following the official guide for [Git installation](https://git-scm.com/downloads). + +::: + +For example: + +```bash +git clone git@github.com:acme/infrastructure-live.git +cd infrastructure-live +``` + +:::note Progress Checklist + + + + +::: + +To bootstrap your `infrastructure-live` repository, we'll use Boilerplate to scaffold it with the necessary IaC code to provision the infrastructure necessary for Pipelines to function. + +The easiest way to install Boilerplate is to use `mise` to install it. + +:::tip + +If you don't have `mise` installed, you can install it by following the official guide for [mise installation](https://mise.jdx.dev/getting-started.html). + +::: + +```bash +mise use -g boilerplate@latest +``` + +:::tip + +If you'd rather install a specific version of Boilerplate, you can use the `ls-remote` command to list the available versions. + +```bash +mise ls-remote boilerplate +``` + +::: + +:::note Progress Checklist + + + +::: + +### Cloud-specific bootstrap instructions + + + + +The resources that you need provisioned in AWS to start managing resources with Pipelines are: + +1. An OpenID Connect (OIDC) provider +2. An IAM role for Pipelines to assume when running Terragrunt plan commands +3. 
An IAM role for Pipelines to assume when running Terragrunt apply commands + +For every account you want Pipelines to manage infrastructure in. + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your `infrastructure-live` repository is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your `infrastructure-live` repository. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Set up the Terragrunt configurations in your `infrastructure-live` repository for bootstrapping Pipelines in a single AWS account +2. Use Terragrunt to provision these resources in your AWS account +3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

Bootstrap your `infrastructure-live` repository

+ +To bootstrap your `infrastructure-live` repository, we'll use Boilerplate to scaffold it with the necessary content for Pipelines to function. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/infrastructure-live?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +You can just reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/infrastructure-live?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=dev' \ + --var 'GitHubOrgName=acme' \ + --var 'GitHubRepoName=infrastructure-live' \ + --var 'AWSAccountID=123456789012' \ + --var 'AWSRegion=us-east-1' \ + --var 'StateBucketName=my-state-bucket' \ + --non-interactive +``` + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + +```yaml title="vars.yml" +AccountName: dev +GitHubOrgName: acme +GitHubRepoName: infrastructure-live +AWSAccountID: 123456789012 +AWSRegion: us-east-1 +StateBucketName: my-state-bucket +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/infrastructure-live?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +::: + +:::note Progress Checklist + + + +::: + +Next, install Terragrunt and OpenTofu locally (the `.mise.toml` file in the root of the repository after scaffolding should already be set to the versions you want for Terragrunt and OpenTofu): + +```bash +mise install +``` + +:::note Progress Checklist + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
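:::tip

Before moving on, you can optionally confirm that the pinned tool versions are active in the repository root. This is just a quick sanity check; the exact versions reported will depend on what the scaffolding pinned in `.mise.toml`.

```bash
# Show the tools mise has activated for this directory, then confirm the binaries resolve.
mise ls --current
terragrunt --version
tofu --version
```

:::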

Provisioning the resources

+ +Once you've set up the Terragrunt configurations, you can use Terragrunt to provision the resources in your AWS account. + +:::tip + +Make sure that you're authenticated with AWS locally before proceeding. + +You can follow the documentation [here](https://search.opentofu.org/provider/hashicorp/aws/latest#authentication-and-configuration) to authenticate with the AWS provider. You are advised to choose an authentication method that doesn't require any hard-coded credentials, like assuming an IAM role. + +::: + +First, make sure that everything is set up correctly by running a plan in the `bootstrap` directory in `name-of-account/_global` where `name-of-account` is the name of the first AWS account you want to bootstrap. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the AWS provider on every run by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + +::: + +:::note Progress Checklist + + + +::: + +Next, apply the changes to your account. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache apply +``` + +:::note Progress Checklist + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
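:::tip

If you'd like to confirm the bootstrap worked before wiring up the workflow, an optional sanity check with the AWS CLI might look like the following. The `pipelines` filter is an assumption based on the template's default role names, so adjust it to match the role names you saw in the plan output.

```bash
# Confirm an OIDC provider now exists in the account:
aws iam list-open-id-connect-providers

# List the IAM roles created for Pipelines (adjust the filter to your role names):
aws iam list-roles --query "Roles[?contains(RoleName, 'pipelines')].RoleName" --output table
```

:::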

Optional: Bootstrapping additional AWS accounts

+ +If you have multiple AWS accounts, and you want to bootstrap them as well, you can do so by following a similar, but slightly condensed process. + +For each additional account you want to bootstrap, you'll use Boilerplate in the root of your `infrastructure-live` repository to scaffold out the necessary content for just that account. + +:::tip + +If you are going to bootstrap more AWS accounts, you'll probably want to commit your existing changes before proceeding. + +```bash +git add . +git commit -m "Add core Pipelines scaffolding [skip ci]" +``` + +The `[skip ci]` in the commit message is just in-case you push your changes up to your repository at this state, as you don't want to trigger Pipelines yet. + +::: + +Just like before, you'll use Boilerplate to scaffold out the necessary content for just that account. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/infrastructure-live?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +Again, you can just reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/account?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=prod' \ + --var 'AWSAccountID=987654321012' \ + --var 'AWSRegion=us-east-1' \ + --var 'StateBucketName=my-prod-state-bucket' \ + --var 'GitHubOrgName=acme' \ + --var 'GitHubRepoName=infrastructure-live' \ + --non-interactive +``` + +If you prefer to store the values in a YAML file and pass it to Boilerplate using the `--var-file` flag, you can do so like this: + +```yaml title="vars.yml" +AccountName: prod +AWSAccountID: 987654321012 +AWSRegion: us-east-1 +StateBucketName: my-prod-state-bucket +GitHubOrgName: acme +GitHubRepoName: infrastructure-live +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/account?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +::: + +:::note Progress Checklist + + + +::: + +Once you've scaffolded out the additional accounts you want to bootstrap, you can use Terragrunt to provision the resources in each of these accounts. + +:::tip + +Make sure that you authenticate to each AWS account you are bootstrapping using AWS credentials for that account before you attempt to provision resources in it. + +::: + +For each account you want to bootstrap, you'll need to run the following commands: + +```bash +cd /_global/bootstrap +terragrunt run --all --non-interactive --provider-cache plan +terragrunt run --all --non-interactive --provider-cache apply +``` + +:::note Progress Checklist + + + + +::: + +
+ + +The resources that you need provisioned in Azure to start managing resources with Pipelines are: + +1. An Azure Resource Group for OpenTofu state resources + 1. An Azure Storage Account in that resource group for OpenTofu state storage + 1. An Azure Storage Container in that storage account for OpenTofu state storage +2. An Entra ID Application to use for plans + 1. A Flexible Federated Identity Credential for the application to authenticate with your repository on any branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + 2. A role assignment for the service principal to access the Azure Storage Account +3. An Entra ID Application to use for applies + 1. A Federated Identity Credential for the application to authenticate with your repository on the deploy branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your `infrastructure-live` repository is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your `infrastructure-live` repository. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Set up these bootstrap resources by creating some Terragrunt configurations in your `infrastructure-live` repository for bootstrapping Pipelines in a single Azure subscription +2. Use Terragrunt to provision these resources in your Azure subscription +3. Finalizing Terragrunt configurations using the bootstrap resources we just provisioned +4. Pull the bootstrap resources into state, now that we have configured a remote state backend +5. (Optionally) Bootstrap additional Azure subscriptions until all your Azure subscriptions are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
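:::tip

Before scaffolding, it can help to confirm which tenant and subscription the `az` CLI is pointed at, since you'll be supplying those IDs to Boilerplate below. This sketch assumes you've already run `az login`; the subscription value is a placeholder.

```bash
# Show the tenant and subscription IDs for the currently selected subscription.
az account show --query "{tenantId: tenantId, subscriptionId: id, name: name}" --output table

# Switch subscriptions first if this isn't the one you intend to bootstrap.
az account set --subscription "SUBSCRIPTION_NAME_OR_ID"
```

:::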

Bootstrap your `infrastructure-live` repository

+ +To bootstrap your `infrastructure-live` repository, we'll use Boilerplate to scaffold it with the necessary content for Pipelines to function. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/infrastructure-live?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +You can just reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/infrastructure-live?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=dev' \ + --var 'GitHubOrgName=acme' \ + --var 'GitHubRepoName=infrastructure-live' \ + --var 'SubscriptionName=dev' \ + --var 'AzureTenantID=00000000-0000-0000-0000-000000000000' \ + --var 'AzureSubscriptionID=11111111-1111-1111-1111-111111111111' \ + --var 'AzureLocation=East US' \ + --var 'StateResourceGroupName=pipelines-rg' \ + --var 'StateStorageAccountName=mysa' \ + --var 'StateStorageContainerName=tfstate' \ + --non-interactive +``` + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + +```yaml title="vars.yml" +AccountName: dev +GitHubOrgName: acme +GitHubRepoName: infrastructure-live +AzureTenantID: 00000000-0000-0000-0000-000000000000 +AzureSubscriptionID: 11111111-1111-1111-1111-111111111111 +AzureLocation: East US +StateResourceGroupName: pipelines-rg +StateStorageAccountName: my-storage-account +StateStorageContainerName: tfstate +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/infrastructure-live?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +::: + +:::note Progress Checklist + +::: + +Next, install Terragrunt and OpenTofu locally (the `.mise.toml` file in the root of the repository after scaffolding should already be set to the versions you want for Terragrunt and OpenTofu): + +```bash +mise install +``` + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

Provisioning the resources

+ +Once you've set up the Terragrunt configurations, you can use Terragrunt to provision the resources in your Azure subscription. + +If you haven't already, you'll want to authenticate to Azure using the `az` CLI. + +```bash +az login +``` + +:::note Progress Checklist + + + +::: + + +To dynamically configure the Azure provider with a given tenant ID and subscription ID, ensure that you are exporting the following environment variables if you haven't the values via the `az` CLI: + +- `ARM_TENANT_ID` +- `ARM_SUBSCRIPTION_ID` + +For example: + +```bash +export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" +export ARM_SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111" +``` + +:::note Progress Checklist + + + +::: + +First, make sure that everything is set up correctly by running a plan in the subscription directory. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the Azure provider on every run to speed up the process. + +::: + +:::note Progress Checklist + + + +::: + +Next, apply the changes to your subscription. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate apply +``` + +:::tip + +We're adding the `--no-stack-generate` flag here, as Terragrunt will already have the requisite stack configurations generated, and we don't want to accidentally overwrite any configurations while we have state stored locally before we pull them into remote state. + +::: + +:::note Progress Checklist + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
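:::tip

If you want to confirm the state resources were created before moving on, an optional check with the `az` CLI might look like the following. The resource group name here reuses the example value from `vars.yml` above, so substitute your own.

```bash
# Confirm the state resource group and storage account exist.
az group show --name pipelines-rg --query properties.provisioningState --output tsv
az storage account list --resource-group pipelines-rg --query "[].name" --output tsv
```

:::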

Finalizing Terragrunt configurations

+ +Once you've provisioned the resources in your Azure subscription, you can finalize the Terragrunt configurations using the bootstrap resources we just provisioned. + +First, edit the `root.hcl` file in the root of your `infrastructure-live` repository to leverage the storage account we just provisioned. + +```hcl title="root.hcl" +locals { + sub_hcl = read_terragrunt_config(find_in_parent_folders("sub.hcl")) + + state_resource_group_name = local.sub_hcl.locals.state_resource_group_name + state_storage_account_name = local.sub_hcl.locals.state_storage_account_name + state_storage_container_name = local.sub_hcl.locals.state_storage_container_name +} + +# FIXME: Uncomment the code below when you've successfully bootstrapped Pipelines state. +# +# remote_state { +# backend = "azurerm" +# generate = { +# path = "backend.tf" +# if_exists = "overwrite" +# } +# config = { +# resource_group_name = local.state_resource_group_name +# storage_account_name = local.state_storage_account_name +# container_name = local.state_storage_container_name +# key = "${path_relative_to_include()}/tofu.tfstate" +# } +# } + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +Next, finalize the `.gruntwork/environment-.hcl` file in the root of your `infrastructure-live` repository to reference the IDs for the applications we just provisioned. + +```hcl title=".gruntwork/environment-.hcl" +environment "dev" { + filter { + paths = ["dev/*"] + } + + authentication { + azure_oidc { + tenant_id = "00000000-0000-0000-0000-000000000000" + subscription_id = "11111111-1111-1111-1111-111111111111" + + plan_client_id = "" # FIXME: Fill in the client ID for the plan application after bootstrapping + apply_client_id = "" # FIXME: Fill in the client ID for the apply application after bootstrapping + } + } +} +``` + +You can find the values for the `plan_client_id` and `apply_client_id` by running `terragrunt stack output` in the `bootstrap` directory in `name-of-subscription/bootstrap`. + +```bash +terragrunt stack output +``` + +The relevant bits that you want to extract from the stack output are the following: + +```hcl +bootstrap = { + apply_app = { + client_id = "33333333-3333-3333-3333-333333333333" + } + plan_app = { + client_id = "44444444-4444-4444-4444-444444444444" + } +} +``` + +You can use those values to set the values for `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-.hcl` file. + +:::note Progress Checklist + + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
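Putting the two together, a completed environment file for the example values above might look like the following. The filename, tenant, subscription, and client IDs shown here are illustrative; use your own values from the bootstrap output.

```hcl title=".gruntwork/environment-dev.hcl"
environment "dev" {
  filter {
    paths = ["dev/*"]
  }

  authentication {
    azure_oidc {
      tenant_id       = "00000000-0000-0000-0000-000000000000"
      subscription_id = "11111111-1111-1111-1111-111111111111"

      # Client IDs taken from the example `terragrunt stack output` above.
      plan_client_id  = "44444444-4444-4444-4444-444444444444"
      apply_client_id = "33333333-3333-3333-3333-333333333333"
    }
  }
}
```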

Pulling the resources into state

+ +Once you've provisioned the resources in your Azure subscription, you can pull the resources into state using the storage account we just provisioned. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- init -migrate-state -force-copy +``` + +:::tip + +We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiting for an interactive prompt to copy up local state. + +::: + +:::note Progress Checklist + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +
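:::tip

Once the migration finishes, you can optionally confirm the state objects landed in the container. This uses the example storage account and container names from `vars.yml` above and assumes your user has data-plane read access to the storage account.

```bash
# List the state objects now stored remotely.
az storage blob list \
  --account-name mysa \
  --container-name tfstate \
  --auth-mode login \
  --query "[].name" \
  --output tsv
```

:::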

Optional: Bootstrapping additional Azure subscriptions

+ +If you have multiple Azure subscriptions, and you want to bootstrap them as well, you can do so by following a similar, but slightly condensed process. + +For each additional subscription you want to bootstrap, you'll use Boilerplate in the root of your `infrastructure-live` repository to scaffold out the necessary content for just that subscription. + +:::tip + +If you are going to bootstrap more Azure subscriptions, you'll probably want to commit your existing changes before proceeding. + +```bash +git add . +git commit -m "Add additional Azure subscriptions [skip ci]" +``` + +::: + +Just like before, you'll use Boilerplate to scaffold out the necessary content for just that subscription. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +Again, you can just reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +::: + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=prod' \ + --var 'GitHubOrgName=acme' \ + --var 'GitHubRepoName=infrastructure-live' \ + --var 'SubscriptionName=prod' \ + --var 'AzureTenantID=00000000-0000-0000-0000-000000000000' \ + --var 'AzureSubscriptionID=99999999-9999-9999-9999-999999999999' \ + --var 'AzureLocation=East US' \ + --var 'StateResourceGroupName=pipelines-rg' \ + --var 'StateStorageAccountName=myprodsa' \ + --var 'StateStorageContainerName=tfstate' \ + --non-interactive +``` + +If you prefer to store the values in a YAML file and pass it to Boilerplate using the `--var-file` flag, you can do so like this: + +```yaml title="vars.yml" +AccountName: prod +GitHubOrgName: acme +GitHubRepoName: infrastructure-live +SubscriptionName: prod +AzureTenantID: 00000000-0000-0000-0000-000000000000 +AzureSubscriptionID: 99999999-9999-9999-9999-999999999999 +AzureLocation: East US +StateResourceGroupName: pipelines-rg +StateStorageAccountName: myprodsa +StateStorageContainerName: tfstate +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +:::note Progress Checklist + + + +::: + +To avoid issues with the remote state backend not existing yet, you'll want to comment out your remote state backend configurations in your `root.hcl` file before you start the bootstrap process for these new subscriptions. + +```hcl title="root.hcl" +locals { + sub_hcl = read_terragrunt_config(find_in_parent_folders("sub.hcl")) + + state_resource_group_name = local.sub_hcl.locals.state_resource_group_name + state_storage_account_name = local.sub_hcl.locals.state_storage_account_name + state_storage_container_name = local.sub_hcl.locals.state_storage_container_name +} + +# FIXME: Temporarily commented out again, pending successful bootstrap of the new subscription(s). 
+# +# remote_state { +# backend = "azurerm" +# generate = { +# path = "backend.tf" +# if_exists = "overwrite" +# } +# config = { +# resource_group_name = local.state_resource_group_name +# storage_account_name = local.state_storage_account_name +# container_name = local.state_storage_container_name +# key = "${path_relative_to_include()}/tofu.tfstate" +# } +# } + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +Just like before, you can use Terragrunt to provision the resources in each of these subscriptions. + +For each subscription you want to bootstrap, you'll need to run the following commands: + +```bash +cd /_global/bootstrap +terragrunt run --all --non-interactive --provider-cache plan +terragrunt run --all --non-interactive --provider-cache --no-stack-generate apply +``` + +:::tip + +We're adding the `--no-stack-generate` flag here, as Terragrunt will already have the requisite stack configurations generated, and we don't want to accidentally overwrite any configurations while we have state stored locally before we pull them into remote state. + +::: + +:::note Progress Checklist + + + + +::: + +Next, you can pull the resources into state using the storage account we just provisioned. + +First, edit the `root.hcl` file in the root of your `infrastructure-live` repository to uncomment the remote state backend configurations you commented out earlier. + +```hcl title="root.hcl" +locals { + sub_hcl = read_terragrunt_config(find_in_parent_folders("sub.hcl")) + + state_resource_group_name = local.sub_hcl.locals.state_resource_group_name + state_storage_account_name = local.sub_hcl.locals.state_storage_account_name + state_storage_container_name = local.sub_hcl.locals.state_storage_container_name +} + +remote_state { + backend = "azurerm" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + resource_group_name = local.state_resource_group_name + storage_account_name = local.state_storage_account_name + container_name = local.state_storage_container_name + key = "${path_relative_to_include()}/tofu.tfstate" + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +Next, you can pull the resources into state using the storage account we just provisioned. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- init -migrate-state -force-copy +``` + +:::tip + +We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiting for an interactive prompt to copy up local state. + +::: + +:::note Progress Checklist + + + +::: + +Finally, we can edit each of the `.gruntwork/environment-.hcl` files in the root of your `infrastructure-live` repository to reference the IDs for the applications we just provisioned. + +```hcl title=".gruntwork/environment-.hcl" +environment "prod" { + filter { + paths = ["prod/*"] + } + + authentication { + azure_oidc { + tenant_id = "00000000-0000-0000-0000-000000000000" + subscription_id = "99999999-9999-9999-9999-999999999999" + + plan_client_id = "" # FIXME: Fill in the client ID for the plan application after bootstrapping + apply_client_id = "" # FIXME: Fill in the client ID for the apply application after bootstrapping + } + } +} +``` + +You can find the values for the `plan_client_id` and `apply_client_id` by running `terragrunt stack output` in the `bootstrap` directory in `name-of-subscription/bootstrap`. 
+ +```bash +terragrunt stack output +``` + +The relevant bits that you want to extract from the stack output are the following: + +```hcl +bootstrap = { + apply_app = { + client_id = "55555555-5555-5555-5555-555555555555" + } + plan_app = { + client_id = "66666666-6666-6666-6666-666666666666" + } +} +``` + +You can use those values to set the values for `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-.hcl` file. + +:::note Progress Checklist + + + + +::: + +
+
+ +## Commit and push your changes + +Commit and push your changes to your repository. + +:::note + +You should include `[skip ci]` in your commit message here to prevent triggering the Pipelines workflow. + +::: + +```bash +git add . +git commit -m "Add Pipelines GitHub Actions workflow [skip ci]" +git push +``` + +:::note Progress Checklist + + + + +::: + +🚀 You've successfully added Gruntwork Pipelines to your new repository! + +## Next steps + +You have successfully completed the installation of Gruntwork Pipelines in a new repository. Proceed to [Deploying your first infrastructure change](/2.0/docs/pipelines/tutorials/deploying-your-first-infrastructure-change.md) to begin deploying changes. diff --git a/docs/2.0/docs/pipelines/installation/authoverview.md b/docs/2.0/docs/pipelines/installation/authoverview.md index 6ed8a85d8d..b9023e00bc 100644 --- a/docs/2.0/docs/pipelines/installation/authoverview.md +++ b/docs/2.0/docs/pipelines/installation/authoverview.md @@ -1,25 +1,32 @@ -# Authenticating Gruntwork Pipelines +# SCM Authentication Overview -Gruntwork Pipelines requires authentication with GitHub/GitLab to perform various functions, including: -* Downloading Gruntwork code, such as the Pipelines binary and Terraform modules, from the `gruntwork-io` GitHub organization. -* Interacting with your repositories, such as: - * Creating pull requests. - * Commenting on pull requests. - * Creating new repositories via Account Factory. - * Updating repository settings, such as enforcing branch protection, via Account Factory. +Gruntwork Pipelines requires authentication with Source Control Management (SCM) platforms (e.g. GitHub, GitLab) for various reasons, including: -Gruntwork provides two authentication methods: a [GitHub App](/2.0/docs/pipelines/installation/viagithubapp.md) and CI Users ([Machine Users](/2.0/docs/pipelines/installation/viamachineusers.md)) with personal access tokens for Pipelines. +- Downloading Gruntwork software, such as the Pipelines binary and OpenTofu modules, from the `gruntwork-io` GitHub organization. +- Interacting with your repositories, such as: + - Creating pull requests. + - Commenting on pull requests. + - Creating new repositories via Account Factory. + - Updating repository settings, such as enforcing branch protection with Account Factory. -Both approaches support the core functionality of Pipelines. However, the GitHub App provides additional features and benefits, making it the recommended method. While Gruntwork strives to ensure feature parity between the two authentication mechanisms, certain advanced features are exclusive to the GitHub App, and this list is expected to grow over time. +Gruntwork provides two authentication methods: + +- [The Gruntwork.io GitHub App](/2.0/docs/pipelines/installation/viagithubapp.md) +- [CI Users (Machine Users)](/2.0/docs/pipelines/installation/viamachineusers) + +Both approaches support the core functionality of Pipelines. The GitHub App provides additional features and benefits, making it the recommended method for most customers that can use it. While Gruntwork strives to ensure feature parity between the two authentication mechanisms, certain advanced features are exclusive to the GitHub App, and this list is expected to grow over time. ## Summary of authentication mechanisms for GitHub **Advantages of the GitHub App**: + - Simplified setup process. - Access to enhanced features and functionality. - Improved user experience during regular operations. 
- Reduced maintenance, as there is no need to install, maintain, or rotate powerful tokens. **Advantages of Machine Users**: + - Compatibility with on-premises GitHub Enterprise installations that cannot interact with third-party servers (e.g., Gruntwork's backend). - Provides a fallback solution to ensure Pipelines continue functioning in the unlikely event of an outage affecting the Gruntwork-hosted backend that powers the GitHub App. +- Allows GitLab customers to download the Pipelines binary from GitLab CI Pipelines. diff --git a/docs/2.0/docs/pipelines/installation/branch-protection.md b/docs/2.0/docs/pipelines/installation/branch-protection.mdx similarity index 58% rename from docs/2.0/docs/pipelines/installation/branch-protection.md rename to docs/2.0/docs/pipelines/installation/branch-protection.mdx index c9dcda359e..399ca57d43 100644 --- a/docs/2.0/docs/pipelines/installation/branch-protection.md +++ b/docs/2.0/docs/pipelines/installation/branch-protection.mdx @@ -1,14 +1,8 @@ -# Branch Protection +# Adding Branch Protection to a GitHub Repository -Gruntwork Pipelines is designed to function within a PR-based workflow. Approving a pull request (PR) or merge request (MR) signals approval to deploy infrastructure, so it's important to configure repository settings and branch protection accurately. +Gruntwork Pipelines is designed to function within a pull request (PR) based workflow. Approving a pull request signals approval to deploy infrastructure, so it's important to configure repository settings and branch protection accurately. -## Recommended settings - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - - - +## GitHub Recommended Settings By default, Gruntwork Pipelines runs a `plan` on every push to a PR and an `apply` on every push to `main`. To ensure that infrastructure changes are reviewed and approved before deployment, branch protection should be enabled on `main` to prevent unauthorized changes. @@ -40,32 +34,13 @@ Below is an example of the recommended branch protection settings: GitHub Enterprise customers can also configure [push rulesets](https://docs.github.com/en/enterprise-cloud@latest/repositories/configuring-branches-and-merges-in-your-repository/managing-rulesets/about-rulesets#push-rulesets). This feature allows restricting edits to `.github/workflows` files, ensuring infrastructure changes are properly reviewed and approved through Pipelines. Follow the documentation [here](https://docs.github.com/en/enterprise-cloud@latest/repositories/configuring-branches-and-merges-in-your-repository/managing-rulesets/creating-rulesets-for-a-repository#creating-a-push-ruleset) to enable push rulesets if available. ::: - - - -## GitLab Recommended Settings - -For GitLab repositories, similar protection rules should be configured on the default branch (typically `main`). Navigate to `Settings > Repository > Protected branches` to configure the following settings: - -- Set the initial default branch to **Protected**. -- Set **Allowed to merge** to "Developers" or a specific group to control who can merge changes. -- Set **Allowed to push** to "No one" to prevent direct pushes to the protected branch. -- (Optional) Enable **Require approval from code owners** to ensure designated reviewers approve changes to specific files. 
- -Below is an example of the recommended GitLab branch protection settings: - -![GitLab Branch Protection Settings](/img/pipelines/gitlab_branch_protection.png) - - - - -## Merge Request/Pull Request Workflow +## Pull Request Workflow -1. Developers make infrastructure changes on a branch and create a PR (GitHub) or MR (GitLab) against the default branch. -2. On merge request/pull request creation, Gruntwork Pipelines runs `plan` for any changes and posts the results as a comment. +1. Developers make infrastructure changes on a branch and create a pull request (PR) against the default branch. +2. On pull request creation, Gruntwork Pipelines runs `plan` for any changes and posts the results as a comment. 3. Gruntwork Pipelines re-runs `plan` on every push to the branch and updates the results in a comment. 4. Gather approvals. If Code Owners is enabled, all relevant code owners must approve the changes. -5. Once approved, merge the merge request/pull request into the default branch. -6. Gruntwork Pipelines runs `apply` for the changes from the merge request/pull request. - - On success, the merge request/pull request is updated to indicate the successful `apply`. - - On failure, the merge request/pull request is updated to indicate the failure of the `apply`. If the failure cannot be resolved by retrying, a new merge request/pull request must be created to address the issues. +5. Once approved, merge the pull request into the default branch. +6. Gruntwork Pipelines runs `apply` for the changes from the pull request. + - On success, the pull request is updated to indicate the successful `apply`. + - On failure, the pull request is updated to indicate the failure of the `apply`. If the failure cannot be resolved by retrying, a new pull request must be created to address the issues. diff --git a/docs/2.0/docs/pipelines/installation/gitlab-branch-protection.md b/docs/2.0/docs/pipelines/installation/gitlab-branch-protection.md new file mode 100644 index 0000000000..51d936034e --- /dev/null +++ b/docs/2.0/docs/pipelines/installation/gitlab-branch-protection.md @@ -0,0 +1,27 @@ +# Adding Branch Protection to a GitLab Project + +Gruntwork Pipelines is designed to function within a merge request (MR) based workflow. Approving a merge request signals approval to deploy infrastructure, so it's important to configure repository settings and branch protection accurately. + +## GitLab Recommended Settings + +For GitLab repositories, similar protection rules should be configured on the default branch (typically `main`). Navigate to `Settings > Repository > Protected branches` to configure the following settings: + +- Set the initial default branch to **Protected**. +- Set **Allowed to merge** to "Developers" or a specific group to control who can merge changes. +- Set **Allowed to push** to "No one" to prevent direct pushes to the protected branch. +- (Optional) Enable **Require approval from code owners** to ensure designated reviewers approve changes to specific files. + +Below is an example of the recommended GitLab branch protection settings: + +![GitLab Branch Protection Settings](/img/pipelines/gitlab_branch_protection.png) + +## Merge Request Workflow + +1. Developers make infrastructure changes on a branch and create a merge request (MR) against the default branch. +2. On merge request creation, Gruntwork Pipelines runs `plan` for any changes and posts the results as a comment. +3. Gruntwork Pipelines re-runs `plan` on every push to the branch and updates the results in a comment. +4. 
Gather approvals. If Code Owners is enabled, all relevant code owners must approve the changes. +5. Once approved, merge the merge request into the default branch. +6. Gruntwork Pipelines runs `apply` for the changes from the merge request. + - On success, the merge request is updated to indicate the successful `apply`. + - On failure, the merge request is updated to indicate the failure of the `apply`. If the failure cannot be resolved by retrying, a new merge request must be created to address the issues. diff --git a/docs/2.0/docs/pipelines/installation/overview.md b/docs/2.0/docs/pipelines/installation/overview.md index 1a2320dc2b..aa37ab3978 100644 --- a/docs/2.0/docs/pipelines/installation/overview.md +++ b/docs/2.0/docs/pipelines/installation/overview.md @@ -2,9 +2,9 @@ Pipelines integrates multiple technologies to deliver a comprehensive CI/CD solution. This guide outlines the available installation methods and their respective use cases. -## Installation as part of DevOps Foundations +## Installation as part of Account Factory -Customers using DevOps Foundations benefit from a guided setup process that includes the complete installation of Gruntwork Pipelines. This process is facilitated by a Gruntwork solutions engineer and includes the following steps: +Customers using Account Factory benefit from a guided setup process that includes the complete installation of Gruntwork Pipelines. This process is facilitated by a Gruntwork solutions engineer and includes the following steps: 1. Creating a new `infrastructure-live-root` repository from the [`infrastructure-live-root-template`](https://github.com/gruntwork-io/infrastructure-live-root-template) template. 2. (On GitHub) Installing the [Gruntwork.io GitHub App](https://github.com/apps/gruntwork-io) on the `infrastructure-live-root` repository or across the entire organization. For detailed instructions, refer to [this guide](/2.0/docs/pipelines/installation/viagithubapp). @@ -12,11 +12,11 @@ Customers using DevOps Foundations benefit from a guided setup process that incl Completing these steps results in a repository fully configured for automated infrastructure deployments using GitOps workflows. -## Installation via manual setup +## Standalone Installation -For users not leveraging DevOps Foundations or needing Gruntwork Pipelines for a standalone repository with existing Terragrunt configurations, Gruntwork Pipelines can be installed as an independent GitHub Actions or GitLab pipelines workflow. +For users not leveraging Account Factory or needing Gruntwork Pipelines for a standalone repository with existing Terragrunt configurations, Gruntwork Pipelines can be installed as an independent GitHub Actions Workflow or GitLab CI Pipeline. -To learn more about this process, consult the documentation for [Adding Pipelines to an Existing Repository](/2.0/docs/pipelines/installation/addingexistingrepo). +To learn more about this process, consult the documentation for [Adding Pipelines to a New Repository](/2.0/docs/pipelines/installation/addingnewrepo) or [Adding Pipelines to an Existing Repository](/2.0/docs/pipelines/installation/addingexistingrepo). ## Platform differences @@ -29,15 +29,9 @@ For GitHub Actions, you have two authentication options: 1. [GitHub App Authentication](/2.0/docs/pipelines/installation/viagithubapp) (Recommended) 2. [Machine User Authentication](/2.0/docs/pipelines/installation/viamachineusers) -### GitLab CI/CD (Beta) +### GitLab CI/CD For GitLab CI/CD: 1. 
[Machine User Authentication](/2.0/docs/pipelines/installation/viamachineusers) is the only supported method -2. Contact Gruntwork support to authorize your GitLab groups - -:::note - - Account Factory features are not currently available on GitLab - - ::: \ No newline at end of file +2. Contact [Gruntwork support](/support) to authorize your GitLab groups diff --git a/docs/2.0/docs/pipelines/installation/viagithubapp.md b/docs/2.0/docs/pipelines/installation/viagithubapp.md index b2dd70375c..501f83dfcc 100644 --- a/docs/2.0/docs/pipelines/installation/viagithubapp.md +++ b/docs/2.0/docs/pipelines/installation/viagithubapp.md @@ -4,7 +4,7 @@ toc_min_heading_level: 2 toc_max_heading_level: 3 --- -# Pipelines Install via GitHub App +# Installing the Gruntwork.io GitHub App The [Gruntwork.io GitHub App](https://github.com/apps/gruntwork-io) is a [GitHub App](https://docs.github.com/en/apps/overview) introduced to help reduce the burden of integrating Gruntwork products to GitHub resources. The app is designed to be lightweight and flexible, providing a simple way to get started with Gruntwork products. @@ -13,6 +13,7 @@ The [Gruntwork.io GitHub App](https://github.com/apps/gruntwork-io) is a [GitHub At this time Gruntwork does not provide an app for GitLab, this page is only relevant for Gruntwork Pipelines users installing in GitHub. ::: + ## Overview There are three major components to keep in mind when working with the Gruntwork.io GitHub App: @@ -28,6 +29,7 @@ The Gruntwork.io GitHub App is the principal that Gruntwork products will utiliz #### Required Permissions As of 2024/09/10, the Gruntwork.io GitHub App requests the following permissions: + - **Read access to Actions**: Allows the app to read GitHub Actions artifacts. - **Write access to Administration**: Allows the app to create new repositories, and add teams as collaborators to repositories. - **Write access to Contents**: Allows the app to read and write repository contents. @@ -40,13 +42,15 @@ As of 2024/09/10, the Gruntwork.io GitHub App requests the following permissions Gruntwork.io requests all of these permissions because it requires them for different operations. Unfortunately, the way GitHub apps work prevents us from requesting permissions on a more granular basis. Know that the GitHub App Service will scope down its permissions whenever possible to the minimum required for the operation at hand. - The level of granularity available to customers when configuring the GitHub App installation is to either install the app on a per-repository basis or on an entire organization. Our recommendation is as follows: + The level of granularity available to customers when configuring the GitHub App installation is to either install the app on a per-repository basis or on an entire organization. Our recommendation is as follows for Account Factory customers: + + - For non-enterprise customers, allow the app for `infrastructure-live-root` repository and (if in-use) `infrastructure-live-access-control` and `infrastructure-catalog`. - * For non-enterprise customers, allow the app for `infrastructure-live-root` repository and (if in-use) `infrastructure-live-access-control` and `infrastructure-catalog`. - * For enterprise customers, allow the app to have access to the entire organization. + - For enterprise customers, allow the app to have access to the entire organization. 
-The reasoning for requiring entire-organization access for enterprise customers is that if you are using Account Factory to create delegated repositories then Account Factory will be creating, and then immediately modifying, new repositories in automated flows, which means it needs access to new repos as soon as they are created which is only possible with entire organization permission. + For non-Account Factory customers, we recommend installing the app on a per-repository basis. + The reasoning for requiring entire-organization access for enterprise customers is that if you are using Account Factory to create delegated repositories then Account Factory will be creating, and then immediately modifying, new repositories in automated flows, which means it needs access to new repos as soon as they are created which is only possible with entire organization permission. If you are unsure how to proceed here, reach out to Gruntwork Support for guidance. @@ -62,7 +66,7 @@ The reasoning for requiring entire-organization access for enterprise customers These permissions are used during the initial bootstrapping process when customers opt-in to additional repositories being created outside of the main `infrastructure-live-root` repository. - This is especially important for DevOps Foundations Enterprise customers, as those customers benefit from the ability to have `infrastructure-live-root` repositories create new repositories and add designated GitHub teams as collaborators via Infrastructure as Code (IaC). This is a critical feature for Enterprise customers who want to be able to scale their infrastructure management across multiple teams with delegated responsibility for segments of their IaC Estate. + This is especially important for Account Factory Enterprise customers, as those customers benefit from the ability to have `infrastructure-live-root` repositories create new repositories and add designated GitHub teams as collaborators via Infrastructure as Code (IaC). This is a critical feature for Enterprise customers who want to be able to scale their infrastructure management across multiple teams with delegated responsibility for segments of their IaC Estate.

Write access to Contents

@@ -108,7 +112,7 @@ The GitHub App Service is used by two major clients: 2. **Gruntwork Pipelines** - The main client for the Gruntwork.io App, and where most of the value is derived. Pipelines uses the GitHub App Service to acquire the relevant access for interacting with GitHub resources on behalf of the user. Access control rules are enforced here to ensure that only the level of access required, and explicitly specified in the Gruntwork Developer Portal can be used by Pipelines to interact with GitHub resources on behalf of the user. + The main client for the Gruntwork.io App, and where most of the value is derived. Pipelines uses the GitHub App Service to acquire the relevant access for interacting with GitHub resources on behalf of the user. Access control rules are enforced here to ensure that only the level of access required (and explicitly specified in the Gruntwork Developer Portal) can be used by Pipelines to interact with GitHub resources on behalf of the user. For example, while the Gruntwork.io GitHub App does have permissions to create new repositories, Pipelines will only do so if a workflow originating from a configured `infrastructure-live-root` repository requests it. @@ -118,7 +122,7 @@ The availability of the Gruntwork.io GitHub App is something Gruntwork will ende Any downtime of Gruntwork services will not impact the ability of your team to manage infrastructure using Gruntwork products. -#### App Only Features +### App Only Features The following features of the Gruntwork.io GitHub App will be unavailable during downtime: @@ -126,11 +130,11 @@ The following features of the Gruntwork.io GitHub App will be unavailable during - **Gruntwork Pipelines Comments**: While Pipelines will allow for IaC updates in a degraded state without the availability of the GitHub App, comments are a feature that rely on the availability of the app for the best experience. - **Gruntwork Pipelines Drift Detection**: Drift detection requires the availability of the GitHub App to function correctly. -#### Fallback +### Fallback -In order to ensure that the availability of the Gruntwork.io GitHub App is not something that can impair the ability of users to drive infrastructure updates, the legacy mechanism of authenticating with GitHub using [Machine users](/2.0/docs/pipelines/installation/viamachineusers.md) is still supported. +In order to ensure that the availability of the Gruntwork.io GitHub App is not something that can impair the ability of users to drive infrastructure updates, users can also authenticate with GitHub using [Machine users](/2.0/docs/pipelines/installation/viamachineusers). -Configuring the `PIPELINES_READ_TOKEN`, `INFRA_ROOT_WRITE_TOKEN` and `ORG_REPO_ADMIN_TOKEN` where necessary (following the documentation linked above) will result in Pipelines using the legacy mechanism to authenticate with GitHub, rather than the Gruntwork.io GitHub App. +Configuring the `PIPELINES_READ_TOKEN`, `INFRA_ROOT_WRITE_TOKEN` and `ORG_REPO_ADMIN_TOKEN` where necessary (following the documentation linked above) will result in Pipelines using the machine users mechanism to authenticate with GitHub, rather than the Gruntwork.io GitHub App. Using these fallback tokens will ensure that Pipelines can continue to perform operations like: @@ -160,9 +164,9 @@ To install the Gruntwork.io GitHub App in your organization follow these steps. ## Configuration -

Infrastructure Live Root Repositories

+### Infrastructure Live Root Repositories -DevOps Foundations treats certain repositories as especially privileged in order to perform critical operations like vending new AWS accounts and creating new repositories. These repositories are called "infrastructure live root repositories" and you can configure them in the [GitHub Account section](https://app.gruntwork.io/account?scroll_to=github-app) for your organization in the Gruntwork developer portal **if you are a designated administrator**. +Account Factory treats certain repositories as especially privileged in order to perform critical operations like vending new AWS accounts and creating new repositories. These repositories are called "infrastructure live root repositories" and you can configure them in the [GitHub Account section](https://app.gruntwork.io/account?scroll_to=github-app) for your organization in the Gruntwork developer portal **if you are a designated administrator**. ![Root Repository Configuration](/img/devops-foundations/github-app/root-repo-config.png) @@ -174,7 +178,7 @@ For more information, see the [relevant architecture documentation](/2.0/docs/pi ## Frequently Asked Questions -#### How do I find my Gruntwork.io GitHub App installation ID? +### How do I find my Gruntwork.io GitHub App installation ID? You can find the installation ID of the Gruntwork.io GitHub App in the URL of the installation page. diff --git a/docs/2.0/docs/pipelines/installation/viamachineusers.md b/docs/2.0/docs/pipelines/installation/viamachineusers.mdx similarity index 88% rename from docs/2.0/docs/pipelines/installation/viamachineusers.md rename to docs/2.0/docs/pipelines/installation/viamachineusers.mdx index 7e28d5de9a..45d6ead0a9 100644 --- a/docs/2.0/docs/pipelines/installation/viamachineusers.md +++ b/docs/2.0/docs/pipelines/installation/viamachineusers.mdx @@ -3,12 +3,14 @@ toc_min_heading_level: 2 toc_max_heading_level: 4 --- -# Setting up Pipelines via GitHub Machine Users + +# Creating Machine Users + import PersistentCheckbox from '/src/components/PersistentCheckbox'; import Tabs from "@theme/Tabs" import TabItem from "@theme/TabItem" -For GitHub users, of the [two methods](/2.0/docs/pipelines/installation/authoverview.md) for installing Gruntwork Pipelines, we strongly recommend using the [GitHub App](/2.0/docs/pipelines/installation/viagithubapp.md). However, if the GitHub App cannot be used or if machine users are required as a [fallback](http://localhost:3000/2.0/docs/pipelines/installation/viagithubapp#fallback), this guide outlines how to set up authentication for Pipelines using access tokens and machine users. +For GitHub users, of the [two methods](/2.0/docs/pipelines/installation/authoverview.md) for installing Gruntwork Pipelines, we strongly recommend using the [GitHub App](/2.0/docs/pipelines/installation/viagithubapp.md). However, if the GitHub App cannot be used or if machine users are required as a [fallback](/2.0/docs/pipelines/installation/viagithubapp#fallback), this guide outlines how to set up authentication for Pipelines using access tokens and machine users. For GitHub or GitLab users, when using tokens, Gruntwork recommends setting up CI users specifically for Gruntwork Pipelines, separate from human users in your organization. This separation ensures workflows are not disrupted if an employee leaves the company and allows for more precise permission management. 
Additionally, using CI users allow you to apply granular permissions that may normally be too restrictive for a normal employee to do their daily work. @@ -19,6 +21,7 @@ This guide will take approximately 30 minutes to complete. ::: ## Background + ### Guidance on storing secrets During this process, you will generate and securely store several access tokens. Use a temporary but secure location for these sensitive values between generating them and storing them in GitHub or GitLab. Follow your organization's security best practices and avoid insecure methods (e.g., Slack or sticky notes) during this exercise. @@ -87,16 +90,17 @@ GitLab uses access tokens for authentication. There are several types of access For Pipelines, we recommend using Project or Group Access Tokens. -Note that Project and Group access tokens are only available in certain GitLab licenses. Specifically: +Note that Project and Group access tokens are only available in certain GitLab licenses. Specifically: [Project Access Tokens](https://docs.gitlab.com/user/project/settings/project_access_tokens/#token-availability) -* On GitLab SaaS: If you have the Premium or Ultimate license tier, only one project access token is available with a [trial license](https://about.gitlab.com/free-trial/). -* On GitLab Self-Managed instances: With any license tier. If you have the Free tier, consider [restricting the creation of project access tokens](https://docs.gitlab.com/user/project/settings/project_access_tokens/#restrict-the-creation-of-project-access-tokens) to lower potential abuse. + +- On GitLab SaaS: If you have the Premium or Ultimate license tier, only one project access token is available with a [trial license](https://about.gitlab.com/free-trial/). +- On GitLab Self-Managed instances: With any license tier. If you have the Free tier, consider [restricting the creation of project access tokens](https://docs.gitlab.com/user/project/settings/project_access_tokens/#restrict-the-creation-of-project-access-tokens) to lower potential abuse. [Group Access Tokens](https://docs.gitlab.com/user/group/settings/group_access_tokens/) -* On GitLab.com, you can use group access tokens if you have the Premium or Ultimate license tier. Group access tokens are not available with a [trial license](https://about.gitlab.com/free-trial/). -* On GitLab Dedicated and self-managed instances, you can use group access tokens with any license tier. +- On GitLab.com, you can use group access tokens if you have the Premium or Ultimate license tier. Group access tokens are not available with a [trial license](https://about.gitlab.com/free-trial/). +- On GitLab Dedicated and self-managed instances, you can use group access tokens with any license tier.
@@ -116,7 +120,7 @@ Both the `ci-user` and the `ci-read-only-user` must: 1. Be members of your GitHub Organization. -2. Be added to your team in **Gruntwork**’s GitHub Organization (See [instructions on inviting a user to your team](https://docs.gruntwork.io/developer-portal/invite-team#inviting-team-members) and [linking the user’s GitHub ID to Gruntwork](https://docs.gruntwork.io/developer-portal/link-github-id)). +2. Be added to your team in **Gruntwork**’s GitHub Organization (See [instructions on inviting a user to your team](https://docs.gruntwork.io/developer-portal/invite-team#inviting-team-members) and [linking the user’s GitHub ID to Gruntwork](https://docs.gruntwork.io/developer-portal/link-github-id)). :::tip We recommend creating two machine users for better access control, but you may adjust this setup to fit your organization’s needs. Ensure permissions are appropriate for their roles, and note that additional GitHub licenses may be required if at capacity. @@ -141,6 +145,7 @@ Ensure the `ci-user` has write access to your: - `infrastructure-live-access-control` repository **Checklist:** + **Create access tokens for the `ci-user`** @@ -148,13 +153,13 @@ Ensure the `ci-user` has write access to your: Generate the required tokens for the ci-user in their GitHub account. **Checklist:** + - #### INFRA_ROOT_WRITE_TOKEN -This [fine-grained](#fine-grained) Personal Access Token allows GitHub Actions to clone `infrastructure-live-root`, open pull requests, and update comments. +This [fine-grained](#fine-grained-tokens) Personal Access Token allows GitHub Actions to clone `infrastructure-live-root`, open pull requests, and update comments. This token must have the following permissions to the `INFRA_ROOT_WRITE_TOKEN` for the `infrastructure-live-root` repository: @@ -175,18 +180,23 @@ Below is a detailed breakdown of the permissions needed for the `INFRA_ROOT_WRIT If you are not an Enterprise customer or prefer Pipelines not to execute certain behaviors, you can opt not to grant the related permissions. ##### Content read & write access + Needed for cloning `infrastructure-live-root` and pushing automated changes. Without this permission, the pull request opened by the GitHub Actions workflow will not trigger automation during account vending. ##### Issues read & write access + Allows Pipelines to open issues that alert teams when manual action is required. ##### Metadata read access + Grants visibility into repository metadata. ##### Pull requests read & write access + Allows Pipelines to create pull requests to introduce infrastructure changes. ##### Workflows read & write access + Required to update workflows when provisioning new repositories. @@ -215,42 +225,47 @@ The following is a breakdown of the permissions needed for the `ORG_REPO_ADMIN_T If you are not an Enterprise customer or prefer Pipelines not to carry out certain actions, you can choose to withhold the related permissions. ##### Administration read & write access + Allows the creation of new repositories for delegated infrastructure management. ##### Content read & write access + Used for bootstrapping repositories and populating them with necessary content. ##### Metadata read access + Grants repository-level insights needed for automation. ##### Pull requests read & write access - This is required to open pull requests. 
When vending delegated repositories for Enterprise customers, Pipelines will open pull requests to automate the process of introducing new Infrastructure as Code changes to drive infrastructure updates. + +This is required to open pull requests. When vending delegated repositories for Enterprise customers, Pipelines will open pull requests to automate the process of introducing new Infrastructure as Code changes to drive infrastructure updates. ##### Workflows read & write access - This is required to update GitHub Action workflow files. When vending delegated repositories for Enterprise customers, Pipelines will create new repositories, including content in the `.github/workflows` directory. Without this permission, Pipelines would not be able to provision repositories with this content. + +This is required to update GitHub Action workflow files. When vending delegated repositories for Enterprise customers, Pipelines will create new repositories, including content in the `.github/workflows` directory. Without this permission, Pipelines would not be able to provision repositories with this content. ##### Members read & write access - Required to update GitHub organization team members. When vending delegated repositories for Enterprise customers, Pipelines will add team members to a team that has access to a delegated repository. Without this permission, Pipelines would not be able to provision repositories that are accessible to the correct team members. +Required to update GitHub organization team members. When vending delegated repositories for Enterprise customers, Pipelines will add team members to a team that has access to a delegated repository. Without this permission, Pipelines would not be able to provision repositories that are accessible to the correct team members. - :::tip -If you are not an Enterprise customer, you should delete it after DevOps Foundations setup. +If you are not an Enterprise customer, you should delete it after Account Factory onboarding. ::: ### ci-read-only-user The `ci-read-only-user` is configured to download private software within GitHub Actions workflows. This user is responsible for accessing Gruntwork IaC Library modules, your infrastructure-modules repository, other private custom module repositories, and the Pipelines CLI. -This user should use a single classic Personal Access Token (PAT)(https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#personal-access-tokens-classic) with read-only permissions. Since classic PATs offer coarse grained access controls, it’s recommended to assign this user to a GitHub team with READ access limited to the `infrastructure-live-root` repository and any relevant module repositories within your GitHub Organization. Adding this user to the Gruntwork Developer Portal will automatically grant access to the Gruntwork IaC Library. +This user should use a single classic [Personal Access Token (PAT)](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#personal-access-tokens-classic) with read-only permissions. Since classic PATs offer coarse grained access controls, it’s recommended to assign this user to a GitHub team with READ access limited to the `infrastructure-live-root` repository and any relevant module repositories within your GitHub Organization. Adding this user to the Gruntwork Developer Portal will automatically grant access to the Gruntwork IaC Library. 
**Invite ci-read-only-user to your repository** Invite `ci-user-read-only` to your `infrastructure-live-root` repository with read access. **Checklist:** + **Create a token for ci-read-only-user** @@ -260,8 +275,6 @@ Generate the following token for the `ci-read-only-user`: **Checklist:** - - #### PIPELINES_READ_TOKEN This [Classic Personal Access Token](#classic-tokens) manages access to private software during GitHub Action runs. @@ -275,6 +288,7 @@ This token must have `repo` scopes. Gruntwork recommends setting expiration to 9 Make sure both machine users are added to your team in Gruntwork’s GitHub Organization. Refer to the [instructions for inviting a user to your team](https://docs.gruntwork.io/developer-portal/invite-team#inviting-team-members) and [linking the user’s GitHub ID to Gruntwork](https://docs.gruntwork.io/developer-portal/link-github-id) for guidance. **Checklist:** + ## Configure secrets for GitHub Actions @@ -287,11 +301,14 @@ Since this guide uses secrets scoped to specific repositories, the token permiss + **Checklist:** +
+ 1. Navigate to your top-level GitHub Organization and select the **Settings** tab. 2. From the navigation bar on the left side, choose **Secrets and variables**, then select **Actions**. @@ -345,13 +362,16 @@ For more details on creating and using GitHub Actions Organization secrets, refe
+ **Checklist:** +
+ Gruntwork Pipelines retrieves these secrets from GitHub Actions secrets configured in the repository. For instructions on creating repository Actions secrets, refer to [creating secrets for a repository](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository). ### `infrastructure-live-root` @@ -378,8 +398,8 @@ If you are **not an Enterprise customer**, you should also do the following: - Delete the `ORG_REPO_ADMIN_TOKEN` Personal Access Token from the `ci-user`’s GitHub account. - Remove the `ORG_REPO_ADMIN_TOKEN` Repository secret from the `infrastructure-live-root` repository. -::: +::: :::info For more information on creating and using GitHub Actions Repository secrets, refer to the [GitHub Documentation](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#creating-secrets-for-a-repository). @@ -391,13 +411,12 @@ For more information on creating and using GitHub Actions Repository secrets, re
- -For GitLab, Gruntwork Pipelines two CI variables. The first, the `PIPELINES_GITLAB_TOKEN` requires the `Developer`, `Maintainer` or `Owner` role and the scopes listed below. This token will be used to authenticate API calls and access repositories within your GitLab group. The second, the `PIPELINES_GITLAB_READ_TOKEN` will be used to access your own code within GitLab. If not set, Pipelines will default to the `CI_JOB_TOKEN` when accessing internal GitLab hosted code. - +For GitLab, Gruntwork Pipelines two CI variables. The first, the `PIPELINES_GITLAB_TOKEN` requires the `Developer`, `Maintainer` or `Owner` role and the scopes listed below. This token will be used to authenticate API calls and access repositories within your GitLab group. The second, the `PIPELINES_GITLAB_READ_TOKEN` will be used to access your own code within GitLab. If not set, Pipelines will default to the `CI_JOB_TOKEN` when accessing internal GitLab hosted code. ### Creating the Access Token Gruntwork recommends [creating](https://docs.gitlab.com/user/project/settings/project_access_tokens/#create-a-project-access-token) two Project or Group Access Tokens as best practice: + | Token Name | Required Scopes | Required Role | Purpose | | ------------------------------- | -------------------------------------------- | ------------------------------- | ---------------------------------------------------------------------------- | | **PIPELINES_GITLAB_TOKEN** | `api` (and `ai_features` if using GitLab AI) | Developer, Maintainer, or Owner | Making API calls (e.g., creating comments on merge requests) | @@ -417,6 +436,7 @@ Set an expiration date according to your organization's security policies. We re ::: **Checklist:** + @@ -434,6 +454,7 @@ Add the `PIPELINES_GITLAB_TOKEN` and `PIPELINES_GITLAB_READ_TOKEN` as CI/CD vari 8. 
Set the value as the Personal Access Token generated in the [Creating the Access Token](#creating-the-access-token) section **Checklist:** + diff --git a/sidebars/docs.js b/sidebars/docs.js index 0de3c67e1f..311f531900 100644 --- a/sidebars/docs.js +++ b/sidebars/docs.js @@ -225,34 +225,22 @@ const sidebar = [ id: "2.0/docs/pipelines/installation/scm-comparison", }, { - label: "Prerequisites", type: "category", + label: "Set up SCM Authentication", collapsed: false, items: [ { - label: "AWS Landing Zone", - type: "doc", - id: "2.0/docs/pipelines/installation/prerequisites/awslandingzone", - }, - ], - }, - { - type: "category", - label: "Enable Auth for Pipelines", - collapsed: false, - items: [ - { - label: "Auth Overview", + label: "Overview", type: "doc", id: "2.0/docs/pipelines/installation/authoverview", }, { - label: "Auth via GitHub App", + label: "GitHub App", type: "doc", id: "2.0/docs/pipelines/installation/viagithubapp", }, { - label: "Auth via Machine Users", + label: "Machine Users", type: "doc", id: "2.0/docs/pipelines/installation/viamachineusers", }, @@ -269,17 +257,17 @@ const sidebar = [ collapsed: false, items: [ { - label: "Creating a New GitHub Repository with Pipelines", + label: "Bootstrap Pipelines in a New GitHub Repository", type: "doc", id: "2.0/docs/pipelines/installation/addingnewrepo", }, { - label: "Adding Pipelines to an Existing GitHub Repository", + label: "Bootstrap Pipelines in an Existing GitHub Repository", type: "doc", id: "2.0/docs/pipelines/installation/addingexistingrepo", }, { - label: "Adding Branch Protection to a Repository", + label: "Adding Branch Protection to a GitHub Repository", type: "doc", id: "2.0/docs/pipelines/installation/branch-protection", }, @@ -291,14 +279,19 @@ const sidebar = [ collapsed: false, items: [ { - label: "Creating a New GitLab Project with Pipelines", + label: "Bootstrap Pipelines in a new GitLab Project", type: "doc", - id: "2.0/docs/pipelines/installation/addingnewgitlabrepo", + id: "2.0/docs/pipelines/installation/addinggitlabrepo", }, { - label: "Adding Pipelines to an Existing GitLab Project", + label: "Bootstrap Pipelines in an Existing GitLab Project", type: "doc", - id: "2.0/docs/pipelines/installation/addinggitlabrepo", + id: "2.0/docs/pipelines/installation/addingexistinggitlabrepo", + }, + { + label: "Adding Branch Protection to a GitLab Project", + type: "doc", + id: "2.0/docs/pipelines/installation/gitlab-branch-protection", }, ], }, @@ -487,10 +480,33 @@ const sidebar = [ }, ], }, + { + label: "Prerequisites", + type: "category", + collapsed: false, + items: [ + { + label: "AWS Landing Zone", + type: "doc", + id: "2.0/docs/accountfactory/prerequisites/awslandingzone", + }, + ], + }, { label: "Setup & Installation", - type: "doc", - id: "2.0/docs/accountfactory/installation/index", + type: "category", + collapsed: true, + link: { + type: "doc", + id: "2.0/docs/accountfactory/installation/index", + }, + items: [ + { + label: "Adding Account Factory to a new repository", + type: "doc", + id: "2.0/docs/accountfactory/installation/addingnewrepo", + }, + ], }, { label: "Guides", diff --git a/src/components/.d.ts b/src/components/.d.ts index 2ab174fdbc..90055ebcde 100644 --- a/src/components/.d.ts +++ b/src/components/.d.ts @@ -1 +1,5 @@ declare module "*.module.css" +declare module "!!raw-loader!*" { + const content: string; + export default content; +} diff --git a/src/components/pipelines/CloudSpecificBootstra.tsx b/src/components/pipelines/CloudSpecificBootstra.tsx new file mode 100644 index 
0000000000..b8f6e69005 --- /dev/null +++ b/src/components/pipelines/CloudSpecificBootstra.tsx @@ -0,0 +1,487 @@ +import React from "react"; +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import Admonition from '@theme/Admonition'; +import CodeBlock from '@theme/CodeBlock'; +import PersistentCheckbox from '../PersistentCheckbox'; + +export const CloudSpecificBootstrap = () => { + return ( + <> + + + +The resources you need provisioned in AWS to start managing resources with Pipelines are: + +1. An OpenID Connect (OIDC) provider +2. An IAM role for Pipelines to assume when running Terragrunt plan commands +3. An IAM role for Pipelines to assume when running Terragrunt apply commands + +For every account you want Pipelines to manage infrastructure in. + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Use Boilerplate to scaffold bootstrap configurations in your project for each AWS account +2. Use Terragrunt to provision these resources in your AWS accounts +3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Bootstrap Your Project for AWS</h3>

+ +First, confirm that you have a `root.hcl` file in the root of your project that looks something like this: + +```hcl title="root.hcl" +locals { + account_hcl = read_terragrunt_config(find_in_parent_folders("account.hcl")) + state_bucket_name = local.account_hcl.locals.state_bucket_name + + region_hcl = read_terragrunt_config(find_in_parent_folders("region.hcl")) + aws_region = local.region_hcl.locals.aws_region +} + +remote_state { + backend = "s3" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + bucket = local.state_bucket_name + region = local.aws_region + key = "${path_relative_to_include()}/tofu.tfstate" + encrypt = true + use_lockfile = true + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Provision AWS Bootstrap Resources</h3>

+ +Once you've scaffolded out the accounts you want to bootstrap, you can use Terragrunt to provision the resources in each of these accounts. + +:::tip + +Make sure that you authenticate to each AWS account you are bootstrapping using AWS credentials for that account before you attempt to provision resources in it. + +You can follow the documentation [here](https://search.opentofu.org/provider/hashicorp/aws/latest#authentication-and-configuration) to authenticate with the AWS provider. You are advised to choose an authentication method that doesn't require any hard-coded credentials, like assuming an IAM role. + +::: + +For each account you want to bootstrap, you'll need to run the following commands: + +First, make sure that everything is set up correctly by running a plan in the `bootstrap` directory in `name-of-account/_global` where `name-of-account` is the name of the AWS account you want to bootstrap. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the AWS provider on every run by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + +::: + +Next, apply the changes to your account. + +```bash title="name-of-account/_global/bootstrap" +terragrunt run --all --non-interactive --provider-cache apply +``` + +:::note Progress Checklist + + + + +::: + +
+ + +The resources you need provisioned in Azure to start managing resources with Pipelines are: + +1. An Azure Resource Group for OpenTofu state resources + 1. An Azure Storage Account in that resource group for OpenTofu state storage + 1. An Azure Storage Container in that storage account for OpenTofu state storage +2. An Entra ID Application to use for plans + 1. A Flexible Federated Identity Credential for the application to authenticate with your project on any branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + 2. A role assignment for the service principal to access the Azure Storage Account +3. An Entra ID Application to use for applies + 1. A Federated Identity Credential for the application to authenticate with your project on the deploy branch + 2. A Service Principal for the application to be used in role assignments + 1. A role assignment for the service principal to access the Azure subscription + +:::tip Don't Panic! + +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + +::: + +The process that we'll follow to get these resources ready for Pipelines is: + +1. Use Boilerplate to scaffold bootstrap configurations in your project for each Azure subscription +2. Use Terragrunt to provision these resources in your Azure subscription +3. Finalizing Terragrunt configurations using the bootstrap resources we just provisioned +4. Pull the bootstrap resources into state, now that we have configured a remote state backend +5. (Optionally) Bootstrap additional Azure subscriptions until all your Azure subscriptions are ready for Pipelines + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Bootstrap Your Project for Azure</h3>

+ +For each Azure subscription that needs bootstrapping, we'll use Boilerplate to scaffold the necessary content. Run this command from the root of your project for each subscription: + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0' \ + --output-folder . +``` + +:::tip + +You'll need to run this boilerplate command once for each Azure subscription you want to manage with Pipelines. Boilerplate will prompt you for subscription-specific values each time. + +::: + +:::tip + +You can reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0' \ + --output-folder . \ + --var 'AccountName=dev' \ + --var 'GitLabGroupName=acme' \ + --var 'GitLabRepoName=infrastructure-live' \ + --var 'GitLabInstanceURL=https://gitlab.com' \ + --var 'SubscriptionName=dev' \ + --var 'AzureTenantID=00000000-0000-0000-0000-000000000000' \ + --var 'AzureSubscriptionID=11111111-1111-1111-1111-111111111111' \ + --var 'AzureLocation=East US' \ + --var 'StateResourceGroupName=pipelines-rg' \ + --var 'StateStorageAccountName=mysa' \ + --var 'StateStorageContainerName=tfstate' \ + --non-interactive +``` + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + +```yaml title="vars.yml" +AccountName: dev +GitLabGroupName: acme +GitLabRepoName: infrastructure-live +GitLabInstanceURL: https://gitlab.com +SubscriptionName: dev +AzureTenantID: 00000000-0000-0000-0000-000000000000 +AzureSubscriptionID: 11111111-1111-1111-1111-111111111111 +AzureLocation: East US +StateResourceGroupName: pipelines-rg +StateStorageAccountName: my-storage-account +StateStorageContainerName: tfstate +``` + +```bash +boilerplate \ + --template-url 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0' \ + --output-folder . \ + --var-file vars.yml \ + --non-interactive +``` + +::: + +:::note Progress Checklist + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Provision Azure Bootstrap Resources</h3>

+ +Once you've scaffolded out the subscriptions you want to bootstrap, you can use Terragrunt to provision the resources in your Azure subscription. + +If you haven't already, you'll want to authenticate to Azure using the `az` CLI. + +```bash +az login +``` + +:::note Progress Checklist + + + +::: + + +To dynamically configure the Azure provider with a given tenant ID and subscription ID, ensure that you are exporting the following environment variables if you haven't the values via the `az` CLI: + +- `ARM_TENANT_ID` +- `ARM_SUBSCRIPTION_ID` + +For example: + +```bash +export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" +export ARM_SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111" +``` + +:::note Progress Checklist + + + +::: + +First, make sure that everything is set up correctly by running a plan in the subscription directory. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache plan +``` + +:::tip + +We're using the `--provider-cache` flag here to ensure that we don't re-download the Azure provider on every run to speed up the process by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + +::: + +:::note Progress Checklist + + + +::: + +Next, apply the changes to your subscription. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate apply +``` + +:::tip + +We're adding the `--no-stack-generate` flag here, as Terragrunt will already have the requisite stack configurations generated, and we don't want to accidentally overwrite any configurations while we have state stored locally before we pull them into remote state. + +::: + +:::note Progress Checklist + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Finalizing Terragrunt configurations</h3>

+ +Once you've provisioned the resources in your Azure subscription, you can finalize the Terragrunt configurations using the bootstrap resources we just provisioned. + +First, edit the `root.hcl` file in the root of your project to leverage the storage account we just provisioned. + +If your `root.hcl` file doesn't already have a remote state backend configuration, you'll need to add one that looks like this: + +```hcl title="root.hcl" +locals { + sub_hcl = read_terragrunt_config(find_in_parent_folders("sub.hcl")) + + state_resource_group_name = local.sub_hcl.locals.state_resource_group_name + state_storage_account_name = local.sub_hcl.locals.state_storage_account_name + state_storage_container_name = local.sub_hcl.locals.state_storage_container_name +} + +remote_state { + backend = "azurerm" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + resource_group_name = local.state_resource_group_name + storage_account_name = local.state_storage_account_name + container_name = local.state_storage_container_name + key = "${path_relative_to_include()}/tofu.tfstate" + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = < + +::: + +Next, finalize the `.gruntwork/environment-.hcl` file in the root of your project to reference the IDs for the applications we just provisioned. + +You can find the values for the `plan_client_id` and `apply_client_id` by running `terragrunt stack output` in the `bootstrap` directory in `name-of-subscription/bootstrap`. + +```bash +terragrunt stack output +``` + +The relevant bits that you want to extract from the stack output are the following: + +```hcl +bootstrap = { + apply_app = { + client_id = "33333333-3333-3333-3333-333333333333" + } + plan_app = { + client_id = "44444444-4444-4444-4444-444444444444" + } +} +``` + +You can use those values to set the values for `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-.hcl` file. + +:::note Progress Checklist + + + + +::: + +{/* We're using an h3 tag here instead of a markdown heading to avoid adding content to the ToC that won't work when switching between tabs */} +

<h3>Pulling the resources into state</h3>

+ +Once you've provisioned the resources in your Azure subscription, you can pull the resources into state using the storage account we just provisioned. + +```bash title="name-of-subscription" +terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- init -migrate-state -force-copy +``` + +:::tip + +We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiting for an interactive prompt to copy up local state. + +::: + +:::note Progress Checklist + + + +::: + +
+
+ +); +} diff --git a/src/components/pipelines/CloudSpecificBootstrap.tsx b/src/components/pipelines/CloudSpecificBootstrap.tsx new file mode 100644 index 0000000000..7693722140 --- /dev/null +++ b/src/components/pipelines/CloudSpecificBootstrap.tsx @@ -0,0 +1,498 @@ +import React from "react"; +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import Admonition from '@theme/Admonition'; +import CodeBlock from '@theme/CodeBlock'; +import PersistentCheckbox from '../PersistentCheckbox'; +import AwsRootHcl from '!!raw-loader!./snippets/aws-root.hcl'; +import AzureRootHcl from '!!raw-loader!./snippets/azure-root.hcl'; +import AzureBootstrapOutputHcl from '!!raw-loader!./snippets/azure-bootstrap-output.hcl'; + +interface CloudSpecificBootstrapProps { + gitProvider: 'github' | 'gitlab'; +} + +// Helper functions to generate provider-specific content +const getGitProviderConfig = (provider: 'github' | 'gitlab') => { + if (provider === 'github') { + return { + awsTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/account?ref=v1.0.0', + azureTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0', + orgVarName: 'GitHubOrgName', + repoVarName: 'GitHubRepoName', + orgLabel: 'GitHub Organization', + repoLabel: 'GitHub Repository', + instanceUrlVar: null, + instanceUrlLabel: null, + issuerVar: null, + issuerLabel: null, + }; + } else { + return { + awsTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/account?ref=v1.0.0', + azureTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0', + orgVarName: 'GitLabGroupName', + repoVarName: 'GitLabRepoName', + orgLabel: 'GitLab Group', + repoLabel: 'GitLab Repository', + instanceUrlVar: 'GitLabInstanceURL', + instanceUrlLabel: 'GitLab Instance URL', + issuerVar: 'Issuer', + issuerLabel: 'Issuer URL', + }; + } +}; + +export const CloudSpecificBootstrap = ({ gitProvider }: CloudSpecificBootstrapProps) => { + const config = getGitProviderConfig(gitProvider); + + return ( + <> + + + +

The resources you need provisioned in AWS to start managing resources with Pipelines are:

1. An OpenID Connect (OIDC) provider
2. An IAM role for Pipelines to assume when running Terragrunt plan commands
3. An IAM role for Pipelines to assume when running Terragrunt apply commands

For every account you want Pipelines to manage infrastructure in.

+ + + +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the terragrunt-scale-catalog repository. + + + +
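To make these three pieces concrete, here is a minimal OpenTofu sketch of roughly what gets provisioned per account, assuming GitHub as the SCM. The role name, repository path, and trust conditions below are illustrative only; the actual resources come from the `terragrunt-scale-catalog` units rather than hand-written HCL like this.

```hcl
# Illustrative sketch only — the real units live in terragrunt-scale-catalog.
data "tls_certificate" "github" {
  url = "https://token.actions.githubusercontent.com/.well-known/openid-configuration"
}

resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.github.certificates[0].sha1_fingerprint]
}

# Hypothetical plan role: assumable from any branch of the repository.
resource "aws_iam_role" "pipelines_plan" {
  name = "pipelines-plan"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRoleWithWebIdentity"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Condition = {
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:acme/infrastructure-live:*"
        }
      }
    }]
  })
}

# The apply role looks the same, but its trust condition is pinned to the deploy
# branch, e.g. "repo:acme/infrastructure-live:ref:refs/heads/main".
```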

The process that we'll follow to get these resources ready for Pipelines is:

1. Use Boilerplate to scaffold bootstrap configurations in your project for each AWS account
2. Use Terragrunt to provision these resources in your AWS accounts
3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines

<h3>Bootstrap Your Project for AWS</h3>

First, confirm that you have a `root.hcl` file in the root of your project that looks something like this:

<CodeBlock language="hcl" title="root.hcl">{AwsRootHcl}</CodeBlock>

If you don't have a `root.hcl` file, you might need to customize the bootstrapping process, as the Terragrunt scale catalog expects a `root.hcl` file in the root of the project. Please contact [Gruntwork support](/support) for assistance if you need help.

For each AWS account that needs bootstrapping, we'll use Boilerplate to scaffold the necessary content. Run this command from the root of your project for each account:

+ + +boilerplate \ + --template-url '{config.awsTemplateUrl}' \ + --output-folder . + + + + +You'll need to run this boilerplate command once for each AWS account you want to manage with Pipelines. Boilerplate will prompt you for account-specific values each time. + + + + + +You can reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + + +{`boilerplate \\ + --template-url '${config.awsTemplateUrl}' \\ + --output-folder . \\ + --var 'AccountName=dev' \\ + --var '${config.orgVarName}=acme' \\ + --var '${config.repoVarName}=infrastructure-live' \\ + ${config.instanceUrlVar ? `--var '${config.instanceUrlVar}=https://gitlab.com' \\` : ''} + --var 'AWSAccountID=123456789012' \\ + --var 'AWSRegion=us-east-1' \\ + --var 'StateBucketName=my-state-bucket' \\ + --non-interactive`} + + +{gitProvider === 'gitlab' && ( + <> +

If you're using a self-hosted GitLab instance, you'll want to make sure the issuer is set correctly when calling Boilerplate.

+ + + {`boilerplate \\ + --template-url '${config.awsTemplateUrl}' \\ + --output-folder . \\ + --var 'AccountName=dev' \\ + --var '${config.orgVarName}=acme' \\ + --var '${config.repoVarName}=infrastructure-live' \\ + --var '${config.instanceUrlVar}=https://gitlab.com' \\ + --var 'AWSAccountID=123456789012' \\ + --var 'AWSRegion=us-east-1' \\ + --var 'StateBucketName=my-state-bucket' \\ + --var '${config.issuerVar}=$$ISSUER_URL$$' \\ + --non-interactive`} + + +)} + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + + +{`AccountName: dev +${config.orgVarName}: acme +${config.repoVarName}: infrastructure-live +${config.instanceUrlVar ? `${config.instanceUrlVar}: https://gitlab.com` : ''} +AWSAccountID: 123456789012 +AWSRegion: us-east-1 +StateBucketName: my-state-bucket`} + + + +{`boilerplate \\ + --template-url '${config.awsTemplateUrl}' \\ + --output-folder . \\ + --var-file vars.yml \\ + --non-interactive`} + + +
<h3>Provision AWS Bootstrap Resources</h3>

Once you've scaffolded out the accounts you want to bootstrap, you can use Terragrunt to provision the resources in each of these accounts.

Make sure that you authenticate to each AWS account you are bootstrapping using AWS credentials for that account before you attempt to provision resources in it.

You can follow the documentation [here](https://search.opentofu.org/provider/hashicorp/aws/latest#authentication-and-configuration) to authenticate with the AWS provider. You are advised to choose an authentication method that doesn't require any hard-coded credentials, like assuming an IAM role.

For each account you want to bootstrap, you'll need to run the following commands:

First, make sure that everything is set up correctly by running a plan in the `bootstrap` directory in `name-of-account/_global` where `name-of-account` is the name of the AWS account you want to bootstrap.

+ + +terragrunt run --all --non-interactive --provider-cache plan + + + + +We're using the `--provider-cache` flag here to ensure that we don't re-download the AWS provider on every run by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + + + +Next, apply the changes to your account. + + +terragrunt run --all --non-interactive --provider-cache apply + + + + + + + + + +
+ + + +

The resources you need provisioned in Azure to start managing resources with Pipelines are:

1. An Azure Resource Group for OpenTofu state resources
   1. An Azure Storage Account in that resource group for OpenTofu state storage
      1. An Azure Storage Container in that storage account for OpenTofu state storage
2. An Entra ID Application to use for plans
   1. A Flexible Federated Identity Credential for the application to authenticate with your project on any branch
   2. A Service Principal for the application to be used in role assignments
      1. A role assignment for the service principal to access the Azure subscription
      2. A role assignment for the service principal to access the Azure Storage Account
3. An Entra ID Application to use for applies
   1. A Federated Identity Credential for the application to authenticate with your project on the deploy branch
   2. A Service Principal for the application to be used in role assignments
      1. A role assignment for the service principal to access the Azure subscription
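To see why there are two applications, it helps to look at the OIDC subject claims their federated credentials match. The sketch below uses GitLab's subject format and a hypothetical `acme/infrastructure-live` project purely for illustration; the catalog generates the real credentials for you.

```hcl
locals {
  # Plan application: a flexible federated credential that matches any branch.
  plan_subject = "project_path:acme/infrastructure-live:ref_type:branch:ref:*"

  # Apply application: a federated credential pinned to the deploy branch.
  apply_subject = "project_path:acme/infrastructure-live:ref_type:branch:ref:main"

  # Both credentials use Entra ID's standard token exchange audience.
  token_audience = "api://AzureADTokenExchange"
}
```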
+ + + +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + + + +The process that we'll follow to get these resources ready for Pipelines is: + +
1. Use Boilerplate to scaffold bootstrap configurations in your project for each Azure subscription
2. Use Terragrunt to provision these resources in your Azure subscription
3. Finalizing Terragrunt configurations using the bootstrap resources we just provisioned
4. Pull the bootstrap resources into state, now that we have configured a remote state backend
5. (Optionally) Bootstrap additional Azure subscriptions until all your Azure subscriptions are ready for Pipelines

<h3>Bootstrap Your Project for Azure</h3>

For each Azure subscription that needs bootstrapping, we'll use Boilerplate to scaffold the necessary content. Run this command from the root of your project for each subscription:

+ + +{`boilerplate \\ + --template-url '${config.azureTemplateUrl}' \\ + --output-folder .`} + + + + +You'll need to run this boilerplate command once for each Azure subscription you want to manage with Pipelines. Boilerplate will prompt you for subscription-specific values each time. + + + + + +

You can reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something.

+ +

Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case.

+ +

e.g.

+ + +{`boilerplate \\ + --template-url '${config.azureTemplateUrl}' \\ + --output-folder . \\ + --var 'AccountName=dev' \\ + --var '${config.orgVarName}=acme' \\ + --var '${config.repoVarName}=infrastructure-live' \\ + ${config.instanceUrlVar ? `--var '${config.instanceUrlVar}=https://gitlab.com' \\` : ''} + --var 'SubscriptionName=dev' \\ + --var 'AzureTenantID=00000000-0000-0000-0000-000000000000' \\ + --var 'AzureSubscriptionID=11111111-1111-1111-1111-111111111111' \\ + --var 'AzureLocation=East US' \\ + --var 'StateResourceGroupName=pipelines-rg' \\ + --var 'StateStorageAccountName=mysa' \\ + --var 'StateStorageContainerName=tfstate' \\ + --non-interactive`} + + +

You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag.

+ + +{`AccountName: dev +${config.orgVarName}: acme +${config.repoVarName}: infrastructure-live +${config.instanceUrlVar ? `${config.instanceUrlVar}: https://gitlab.com` : ''} +SubscriptionName: dev +AzureTenantID: 00000000-0000-0000-0000-000000000000 +AzureSubscriptionID: 11111111-1111-1111-1111-111111111111 +AzureLocation: East US +StateResourceGroupName: pipelines-rg +StateStorageAccountName: my-storage-account +StateStorageContainerName: tfstate`} + + + +{`boilerplate \\ + --template-url '${config.azureTemplateUrl}' \\ + --output-folder . \\ + --var-file vars.yml \\ + --non-interactive`} + + +
+ + + + + + + +

<h3>Provision Azure Bootstrap Resources</h3>

+ +

Once you've scaffolded out the subscriptions you want to bootstrap, you can use Terragrunt to provision the resources in your Azure subscription.

+ +

If you haven't already, you'll want to authenticate to Azure using the `az` CLI.

+ + +az login + + + + + + + + + +To dynamically configure the Azure provider with a given tenant ID and subscription ID, ensure that you are exporting the following environment variables if you haven't the values via the `az` CLI: + +- `ARM_TENANT_ID` +- `ARM_SUBSCRIPTION_ID` + +For example: + + +export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" +export ARM_SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111" + + + + + + + + +First, make sure that everything is set up correctly by running a plan in the subscription directory. + + +terragrunt run --all --non-interactive --provider-cache plan + + + + +We're using the `--provider-cache` flag here to ensure that we don't re-download the Azure provider on every run to speed up the process by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + + + + + + + + + +Next, apply the changes to your subscription. + + +terragrunt run --all --non-interactive --provider-cache --no-stack-generate apply + + + + +We're adding the `--no-stack-generate` flag here, as Terragrunt will already have the requisite stack configurations generated, and we don't want to accidentally overwrite any configurations while we have state stored locally before we pull them into remote state. + + + + + + + + +

Finalizing Terragrunt configurations

+ +Once you've provisioned the resources in your Azure subscription, you can finalize the Terragrunt configurations using the bootstrap resources we just provisioned. + +First, edit the `root.hcl` file in the root of your project to leverage the storage account we just provisioned. + +If your `root.hcl` file doesn't already have a remote state backend configuration, you'll need to add one that looks like this: + + +{AzureRootHcl} + + + + + + + + +

Next, finalize the `.gruntwork/environment-(name-of-subscription).hcl` file in the root of your project to reference the IDs for the applications we just provisioned.

+ +

You can find the values for `plan_client_id` and `apply_client_id` by running `terragrunt stack output` in the `name-of-subscription/bootstrap` directory.

+ + +terragrunt stack output + + +

The relevant bits that you want to extract from the stack output are the following:

+ + +{AzureBootstrapOutputHcl} + + +

Use those values to set `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-(name-of-subscription).hcl` file.

+ + + + + + + + +

Pulling the resources into state

+ +

Once you've finalized the Terragrunt configurations, you can pull the bootstrap resources into remote state using the storage account we just provisioned.

+ + +terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- init -migrate-state -force-copy + + + + +We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiting for an interactive prompt to copy up local state. + + + + + + + + + +
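To double-check that the migration succeeded, you can list what each unit now tracks in state using the same `run --all` pattern as above; this assumes the remote backend configured in `root.hcl` is the storage account you just bootstrapped:

```bash
# List the resources tracked in state, now read from the Azure storage account backend.
terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- state list
```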
+
+ + ); +}; + +export default CloudSpecificBootstrap; diff --git a/src/components/pipelines/CloudSpecificBootstrap.tsx-attempt-1 b/src/components/pipelines/CloudSpecificBootstrap.tsx-attempt-1 new file mode 100644 index 0000000000..907d53a3ce --- /dev/null +++ b/src/components/pipelines/CloudSpecificBootstrap.tsx-attempt-1 @@ -0,0 +1,498 @@ +import React from "react"; +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import Admonition from '@theme/Admonition'; +import CodeBlock from '@theme/CodeBlock'; +import PersistentCheckbox from '../PersistentCheckbox'; +import AwsRootHcl from '!!raw-loader!./snippets/aws-root.hcl'; +import AzureRootHcl from '!!raw-loader!./snippets/azure-root.hcl'; +import AzureBootstrapOutputHcl from '!!raw-loader!./snippets/azure-bootstrap-output.hcl'; + +interface CloudSpecificBootstrapProps { + gitProvider: 'github' | 'gitlab'; +} + +// Helper functions to generate provider-specific content +const getGitProviderConfig = (provider: 'github' | 'gitlab') => { + if (provider === 'github') { + return { + awsTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/github/account?ref=v1.0.0', + azureTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/github/subscription?ref=v1.0.0', + orgVarName: 'GitHubOrgName', + repoVarName: 'GitHubRepoName', + orgLabel: 'GitHub Organization', + repoLabel: 'GitHub Repository', + instanceUrlVar: null, + instanceUrlLabel: null, + issuerVar: null, + issuerLabel: null, + }; + } else { + return { + awsTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/aws/gitlab/account?ref=v1.0.0', + azureTemplateUrl: 'github.com/gruntwork-io/terragrunt-scale-catalog//templates/boilerplate/azure/gitlab/subscription?ref=v1.0.0', + orgVarName: 'GitLabGroupName', + repoVarName: 'GitLabRepoName', + orgLabel: 'GitLab Group', + repoLabel: 'GitLab Repository', + instanceUrlVar: 'GitLabInstanceURL', + instanceUrlLabel: 'GitLab Instance URL', + issuerVar: 'Issuer', + issuerLabel: 'Issuer URL', + }; + } +}; + +export const CloudSpecificBootstrap = ({ gitProvider }: CloudSpecificBootstrapProps) => { + const config = getGitProviderConfig(gitProvider); + + return ( + <> + + + +

The resources you need provisioned in AWS to start managing resources with Pipelines are:

+ +
1. An OpenID Connect (OIDC) provider
2. An IAM role for Pipelines to assume when running Terragrunt plan commands
3. An IAM role for Pipelines to assume when running Terragrunt apply commands

These resources are required in every AWS account you want Pipelines to manage infrastructure in.

+ + + +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the terragrunt-scale-catalog repository. + + + +

The process that we'll follow to get these resources ready for Pipelines is:

+ +
1. Use Boilerplate to scaffold bootstrap configurations in your project for each AWS account
2. Use Terragrunt to provision these resources in your AWS accounts
3. (Optionally) Bootstrap additional AWS accounts until all your AWS accounts are ready for Pipelines

Bootstrap Your Project for AWS

+ +

First, confirm that you have a `root.hcl` file in the root of your project that looks something like this:

+ +{AwsRootHcl} + +

If you don't have a `root.hcl` file, you might need to customize the bootstrapping process, as the Terragrunt scale catalog expects a `root.hcl` file in the root of the project. Please contact [Gruntwork support](/support) for assistance if you need help.

+ +

For each AWS account that needs bootstrapping, we'll use Boilerplate to scaffold the necessary content. Run this command from the root of your project for each account:

+ + +boilerplate \ + --template-url '{config.awsTemplateUrl}' \ + --output-folder . + + + + +You'll need to run this boilerplate command once for each AWS account you want to manage with Pipelines. Boilerplate will prompt you for account-specific values each time. + + + + + +You can reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something. + +Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case. + +e.g. + + +{`boilerplate \\ + --template-url '${config.awsTemplateUrl}' \\ + --output-folder . \\ + --var 'AccountName=dev' \\ + --var '${config.orgVarName}=acme' \\ + --var '${config.repoVarName}=infrastructure-live' \\ + ${config.instanceUrlVar ? `--var '${config.instanceUrlVar}=https://gitlab.com' \\` : ''} + --var 'AWSAccountID=123456789012' \\ + --var 'AWSRegion=us-east-1' \\ + --var 'StateBucketName=my-state-bucket' \\ + --non-interactive`} + + +{gitProvider === 'gitlab' && ( + <> +

If you're using a self-hosted GitLab instance, you'll want to make sure the issuer is set correctly when calling Boilerplate.

+ + + {`boilerplate \\ + --template-url '${config.awsTemplateUrl}' \\ + --output-folder . \\ + --var 'AccountName=dev' \\ + --var '${config.orgVarName}=acme' \\ + --var '${config.repoVarName}=infrastructure-live' \\ + --var '${config.instanceUrlVar}=https://gitlab.com' \\ + --var 'AWSAccountID=123456789012' \\ + --var 'AWSRegion=us-east-1' \\ + --var 'StateBucketName=my-state-bucket' \\ + --var '${config.issuerVar}=$$ISSUER_URL$$' \\ + --non-interactive`} + + +)} + +You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag. + + +{`AccountName: dev +${config.orgVarName}: acme +${config.repoVarName}: infrastructure-live +${config.instanceUrlVar ? `${config.instanceUrlVar}: https://gitlab.com` : ''} +AWSAccountID: 123456789012 +AWSRegion: us-east-1 +StateBucketName: my-state-bucket`} + + + +{`boilerplate \\ + --template-url '${config.awsTemplateUrl}' \\ + --output-folder . \\ + --var-file vars.yml \\ + --non-interactive`} + + +
+ + + + + + + +

Provision AWS Bootstrap Resources

+ +

Once you've scaffolded out the accounts you want to bootstrap, you can use Terragrunt to provision the resources in each of these accounts.

+ + + +

Make sure that you authenticate to each AWS account you are bootstrapping using AWS credentials for that account before you attempt to provision resources in it.

+ +

You can follow the documentation here to authenticate with the AWS provider. You are advised to choose an authentication method that doesn't require any hard-coded credentials, like assuming an IAM role.

+ +
+ +
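For example, if you use AWS IAM Identity Center (SSO) profiles, authenticating might look like the sketch below; the profile name is illustrative and not something the bootstrap templates define:

```bash
# Log in with an SSO profile that maps to the target account (profile name is an example).
aws sso login --profile dev-admin
export AWS_PROFILE=dev-admin

# Confirm you're operating in the account you intend to bootstrap.
aws sts get-caller-identity
```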

For each account you want to bootstrap, you'll need to run the following commands:

+ +

First, make sure that everything is set up correctly by running a plan in the `bootstrap` directory under `name-of-account/_global`, where `name-of-account` is the name of the AWS account you want to bootstrap.

+ + +terragrunt run --all --non-interactive --provider-cache plan + + + + +We're using the `--provider-cache` flag here to ensure that we don't re-download the AWS provider on every run by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + + + +Next, apply the changes to your account. + + +terragrunt run --all --non-interactive --provider-cache apply + + + + + + + + + +
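As an optional sanity check after the apply, you can confirm the bootstrap resources exist using the AWS CLI. The role-name filter below is a guess; adjust it to match the role names generated by your scaffolded configuration:

```bash
# The OIDC provider created by the bootstrap should now appear here.
aws iam list-open-id-connect-providers

# The plan and apply roles should exist as well (adjust the name filter as needed).
aws iam list-roles --query "Roles[?contains(RoleName, 'pipelines')].RoleName" --output table
```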
+ + + +

The resources you need provisioned in Azure to start managing resources with Pipelines are:

+ +
1. An Azure Resource Group for OpenTofu state resources
   1. An Azure Storage Account in that resource group for OpenTofu state storage
      1. An Azure Storage Container in that storage account for OpenTofu state storage
2. An Entra ID Application to use for plans
   1. A Flexible Federated Identity Credential for the application to authenticate with your project on any branch
   2. A Service Principal for the application to be used in role assignments
      1. A role assignment for the service principal to access the Azure subscription
      2. A role assignment for the service principal to access the Azure Storage Account
3. An Entra ID Application to use for applies
   1. A Federated Identity Credential for the application to authenticate with your project on the deploy branch
   2. A Service Principal for the application to be used in role assignments
      1. A role assignment for the service principal to access the Azure subscription
+ + + +This may seem like a lot to set up, but the content you need to add to your project is minimal. The majority of the work will be pulled from a reusable catalog that you'll reference in your project. + +If you want to peruse the catalog that's used in the bootstrap process, you can take a look at the [terragrunt-scale-catalog](https://github.com/gruntwork-io/terragrunt-scale-catalog) repository. + + + +The process that we'll follow to get these resources ready for Pipelines is: + +
1. Use Boilerplate to scaffold bootstrap configurations in your project for each Azure subscription
2. Use Terragrunt to provision these resources in your Azure subscription
3. Finalize the Terragrunt configurations using the bootstrap resources we just provisioned
4. Pull the bootstrap resources into state, now that we have configured a remote state backend
5. (Optionally) Bootstrap additional Azure subscriptions until all your Azure subscriptions are ready for Pipelines

Bootstrap Your Project for Azure

+ +

For each Azure subscription that needs bootstrapping, we'll use Boilerplate to scaffold the necessary content. Run this command from the root of your project for each subscription:

+ + +{`boilerplate \\ + --template-url '${config.azureTemplateUrl}' \\ + --output-folder .`} + + + + +You'll need to run this boilerplate command once for each Azure subscription you want to manage with Pipelines. Boilerplate will prompt you for subscription-specific values each time. + + + + + +

You can reply `y` to all the prompts to include dependencies, and accept defaults unless you want to customize something.

+ +

Alternatively, you could run Boilerplate non-interactively by passing the `--non-interactive` flag. You'll need to supply the relevant values for required variables in that case.

+ +

For example:

+ + +{`boilerplate \\ + --template-url '${config.azureTemplateUrl}' \\ + --output-folder . \\ + --var 'AccountName=dev' \\ + --var '${config.orgVarName}=acme' \\ + --var '${config.repoVarName}=infrastructure-live' \\ + ${config.instanceUrlVar ? `--var '${config.instanceUrlVar}=https://gitlab.com' \\` : ''} + --var 'SubscriptionName=dev' \\ + --var 'AzureTenantID=00000000-0000-0000-0000-000000000000' \\ + --var 'AzureSubscriptionID=11111111-1111-1111-1111-111111111111' \\ + --var 'AzureLocation=East US' \\ + --var 'StateResourceGroupName=pipelines-rg' \\ + --var 'StateStorageAccountName=mysa' \\ + --var 'StateStorageContainerName=tfstate' \\ + --non-interactive`} + + +

You can also choose to store these values in a YAML file and pass it to Boilerplate using the `--var-file` flag.

+ + +{`AccountName: dev +${config.orgVarName}: acme +${config.repoVarName}: infrastructure-live +${config.instanceUrlVar ? `${config.instanceUrlVar}: https://gitlab.com` : ''} +SubscriptionName: dev +AzureTenantID: 00000000-0000-0000-0000-000000000000 +AzureSubscriptionID: 11111111-1111-1111-1111-111111111111 +AzureLocation: East US +StateResourceGroupName: pipelines-rg +StateStorageAccountName: my-storage-account +StateStorageContainerName: tfstate`} + + + +{`boilerplate \\ + --template-url '${config.azureTemplateUrl}' \\ + --output-folder . \\ + --var-file vars.yml \\ + --non-interactive`} + + +
+ + + + + + + +

Provision Azure Bootstrap Resources

+ +

Once you've scaffolded out the subscriptions you want to bootstrap, you can use Terragrunt to provision the resources in each Azure subscription.

+ +

If you haven't already, you'll want to authenticate to Azure using the `az` CLI.

+ + +az login + + + + + + + + + +To dynamically configure the Azure provider with a given tenant ID and subscription ID, ensure that you are exporting the following environment variables if you haven't already set the values via the `az` CLI: + +- `ARM_TENANT_ID` +- `ARM_SUBSCRIPTION_ID` + +For example: + + +export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000" +export ARM_SUBSCRIPTION_ID="11111111-1111-1111-1111-111111111111" + + + + + + + + +First, make sure that everything is set up correctly by running a plan in the subscription directory. + + +terragrunt run --all --non-interactive --provider-cache plan + + + + +We're using the `--provider-cache` flag here to ensure that we don't re-download the Azure provider on every run, speeding up the process by leveraging the [Terragrunt Provider Cache Server](https://terragrunt.gruntwork.io/docs/features/provider-cache-server/). + + + + + + + + + +Next, apply the changes to your subscription. + + +terragrunt run --all --non-interactive --provider-cache --no-stack-generate apply + + + + +We're adding the `--no-stack-generate` flag here, as Terragrunt will already have the requisite stack configurations generated, and we don't want to accidentally overwrite any configurations while we have state stored locally before we pull them into remote state. + + + + + + + + +
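As an aside: if you work across multiple subscriptions, you can also point the `az` CLI at the target subscription rather than (or in addition to) exporting the `ARM_*` variables. A minimal sketch, using a placeholder subscription ID:

```bash
# Select the subscription being bootstrapped (the ID here is a placeholder).
az account set --subscription "11111111-1111-1111-1111-111111111111"

# Confirm the CLI is now targeting the expected tenant and subscription.
az account show --query "{name:name, id:id, tenantId:tenantId}" --output table
```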

Finalizing Terragrunt configurations

+ +Once you've provisioned the resources in your Azure subscription, you can finalize the Terragrunt configurations using the bootstrap resources we just provisioned. + +First, edit the `root.hcl` file in the root of your project to leverage the storage account we just provisioned. + +If your `root.hcl` file doesn't already have a remote state backend configuration, you'll need to add one that looks like this: + + +{AzureRootHcl} + + + + + + + + +

Next, finalize the `.gruntwork/environment-(name-of-subscription).hcl` file in the root of your project to reference the IDs for the applications we just provisioned.

+ +

You can find the values for `plan_client_id` and `apply_client_id` by running `terragrunt stack output` in the `name-of-subscription/bootstrap` directory.

+ + +terragrunt stack output + + +

The relevant bits that you want to extract from the stack output are the following:

+ + +{AzureBootstrapOutputHcl} + + +

Use those values to set `plan_client_id` and `apply_client_id` in the `.gruntwork/environment-(name-of-subscription).hcl` file.

+ + + + + + + + +

Pulling the resources into state

+ +

Once you've finalized the Terragrunt configurations, you can pull the bootstrap resources into remote state using the storage account we just provisioned.

+ + +terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- init -migrate-state -force-copy + + + + +We're adding the `-force-copy` flag here to avoid any issues with OpenTofu waiting for an interactive prompt to copy up local state. + + + + + + + + + +
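To double-check that the migration succeeded, you can list what each unit now tracks in state using the same `run --all` pattern as above; this assumes the remote backend configured in `root.hcl` is the storage account you just bootstrapped:

```bash
# List the resources tracked in state, now read from the Azure storage account backend.
terragrunt run --all --non-interactive --provider-cache --no-stack-generate -- state list
```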
+
+ + ); +}; + +export default CloudSpecificBootstrap; diff --git a/src/components/pipelines/snippets/aws-root.hcl b/src/components/pipelines/snippets/aws-root.hcl new file mode 100644 index 0000000000..94a19d7743 --- /dev/null +++ b/src/components/pipelines/snippets/aws-root.hcl @@ -0,0 +1,32 @@ +locals { + account_hcl = read_terragrunt_config(find_in_parent_folders("account.hcl")) + state_bucket_name = local.account_hcl.locals.state_bucket_name + + region_hcl = read_terragrunt_config(find_in_parent_folders("region.hcl")) + aws_region = local.region_hcl.locals.aws_region +} + +remote_state { + backend = "s3" + generate = { + path = "backend.tf" + if_exists = "overwrite" + } + config = { + bucket = local.state_bucket_name + region = local.aws_region + key = "${path_relative_to_include()}/tofu.tfstate" + encrypt = true + use_lockfile = true + } +} + +generate "provider" { + path = "provider.tf" + if_exists = "overwrite_terragrunt" + contents = <