This repository contains the infrastructure as code to support the DiAGRAM application.
DiAGRAM's application code lives in a separate repository, here.
The different infrastructure components required by the DiAGRAM application are separated out into different modules. Briefly, they are:

- `github-actions-user/`: A module to define an IAM user, used by the DiAGRAM application's GitHub Actions workflows.
- `container-registry/`: A module to define a container registry, used to store the custom Lambda container image used in the DiAGRAM application's backend.
- `lambda-api/`: A module to define a Lambda function, and its API Gateway integration, used to serve the DiAGRAM application's backend.
- `website/`: A module to define an S3 bucket as a static website, used to host the DiAGRAM application's frontend, and to define the content delivery network for the application's frontend and backend.
To be able to deploy and test changes to the DiAGRAM application's infrastructure, you will first need an AWS IAM user provisioned for you by The National Archives (TNA).
You will then need to generate an access key and corresponding secret. You can
do so by logging in to AWS with your TNA-provisioned IAM user, and navigating to
Services -> Security, Identity, & Compliance -> IAM -> Users. Once
here, locate and select your username in the displayed table, navigate to your
Security credentials tab, and select Create access key. Naturally, your
access key and its associated secret should be treated like a password, and
stored appropriately. You can read more about managing access keys
from AWS' own documentation.
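Alternatively, if you already have working AWS CLI credentials for this IAM user and it is permitted to manage its own access keys (both assumptions), a key pair can also be created from the command line:

```sh
# Replace <your-iam-username> with your TNA-provisioned IAM username.
# The secret is only returned once, so store it securely straight away.
aws iam create-access-key --user-name <your-iam-username>
```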
Once you have your access key and corresponding secret, from within each
terraform module, create a file `secrets.auto.tfvars`, and populate it with the
following content:

```hcl
secrets = {
  tna_aws_access_key = "<YOUR-ACCESS-KEY-HERE>"
  tna_aws_secret_key = "<YOUR-SECRET-KEY-HERE>"
}
```

Replace `<YOUR-ACCESS-KEY-HERE>` and `<YOUR-SECRET-KEY-HERE>` with your
access key and secret, as generated above.
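Optionally, and assuming the AWS CLI is installed, you can sanity-check the key pair before using it with terraform:

```sh
# Should print the account ID and ARN associated with the new key pair
AWS_ACCESS_KEY_ID="<YOUR-ACCESS-KEY-HERE>" \
AWS_SECRET_ACCESS_KEY="<YOUR-SECRET-KEY-HERE>" \
aws sts get-caller-identity
```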
Next, you will need to add the AWS account IDs of the three deployment
environments (`live`, `stage`, and `dev`), as provisioned by TNA, to the
`secrets.auto.tfvars` file. This file should then have the format:

```hcl
secrets = {
  tna_aws_access_key = "<YOUR-ACCESS-KEY-HERE>"
  tna_aws_secret_key = "<YOUR-SECRET-KEY-HERE>"
  service = {
    live = {
      account = "<LIVE-ENV-ACCOUNT-ID>"
    }
    stage = {
      account = "<STAGE-ENV-ACCOUNT-ID>"
    }
    dev = {
      account = "<DEV-ENV-ACCOUNT-ID>"
    }
  }

  # Allowed IPs only required for website component
  allowed_ips = []
}
```
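Since `secrets.auto.tfvars` contains credentials, it should not be committed. Assuming the file is covered by the repository's `.gitignore` (an assumption worth verifying), you can check which ignore rule matches it with:

```sh
# Prints the matching ignore rule; exits non-zero if the file is not ignored
git check-ignore -v secrets.auto.tfvars
```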
The DiAGRAM application and its supporting infrastructure are deployed into
three separate environments: `live`, `stage`, and `dev`. Each of these separate
environments corresponds to a separate AWS account. These separate environments
are managed with terraform workspaces.

You should create these workspaces from the root of each module with:

```sh
terraform init

for workspace in live stage dev; do
    terraform workspace new "$workspace"
done
```

You can then select an environment to work from with e.g. `terraform workspace select dev`.
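You can confirm that the workspaces exist, and see which one is currently selected (marked with an asterisk), with:

```sh
terraform workspace list
```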
The infrastructure supporting the DiAGRAM application must be deployed in stages. This is because the Lambda function serving the backend requests cannot be created until a custom Lambda container image has been pushed to AWS ECR. To deploy the application from scratch, the following steps must be followed:
1. Configure your AWS credentials, and the terraform workspaces, as detailed in the above sections.
2. `terraform apply` the `github-actions-user/` module in each workspace.
3. For each workspace, view the access key and secret generated by the previous step (`terraform output gha_access_key_id`, and `terraform output gha_access_key_secret`), and add these values to the corresponding environment in the application code's GitHub Environment secrets. In other words, you should add the access key and secret generated from the `live` workspace to the GitHub `live` environment, the access key and secret generated from the `dev` workspace to the GitHub `dev` environment, and so on. The access key should be added as a secret named `AWS_ACCESS_KEY_ID`, and the secret should be added as a secret named `AWS_SECRET_ACCESS_KEY`. (A sketch of this step using the GitHub CLI is given after this list.)
4. `terraform apply` the `container-registry/` module in each workspace.
5. Define the `ECR_REPO_NAME` secret for the `live`, `stage` and `dev` environments on GitHub, based on the `ecr_repo_name` terraform variable, and then trigger the CI job `update-backend` from each GitHub environment. This will build and push the custom Lambda container image used by the backend to the container registry provisioned in the previous step.
6. `terraform apply` the `lambda-api/` module in each workspace. This will create a Lambda function, using the custom Lambda container image pushed in the previous step, and its API Gateway integration.
7. Run `terraform apply -target module.tna_zones` and then `terraform apply`, for each workspace in the `website/` module. Add any IPs that need access to the dev and stage sites to the `allowed_ips` list in `secrets.auto.tfvars`.
8. Trigger the CI job `update-frontend` from each GitHub environment. This will build the static site frontend, and upload it to the website's S3 bucket, as provisioned in the previous step.
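As a minimal sketch of step 3, assuming the GitHub CLI (`gh`) is installed and authenticated, that the application repository already defines `live`, `stage`, and `dev` environments, and that the loop is run from within the `github-actions-user/` module; `<org>/<diagram-application-repo>` is a placeholder, not the real repository name:

```sh
# Placeholder; replace with the DiAGRAM application repository
REPO="<org>/<diagram-application-repo>"

for env in live stage dev; do
    terraform workspace select "$env"

    # Read the key pair created by the github-actions-user/ module in this workspace
    access_key_id=$(terraform output -raw gha_access_key_id)
    secret_access_key=$(terraform output -raw gha_access_key_secret)

    # Store them as environment-scoped secrets on the application repository
    gh secret set AWS_ACCESS_KEY_ID --env "$env" --repo "$REPO" --body "$access_key_id"
    gh secret set AWS_SECRET_ACCESS_KEY --env "$env" --repo "$REPO" --body "$secret_access_key"
done
```

The same values can, of course, be added by hand through the application repository's environment settings on GitHub.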
After a successful deployment of the DiAGRAM application's infrastructure, the DiAGRAM application should be accessible from: https://diagram.nationalarchives.gov.uk.
You should be able to `curl` the application's Lambda backend with:

```sh
curl -X POST "https://diagram.nationalarchives.gov.uk/api/test/is_alive"
```

If successful, this command should return the JSON `{"alive":true}`.
Each environment provides its own Route 53 (R53) hosted zone. Amazon will provide four name
servers for each R53 zone (within each environment). These NS values are
provided to The National Archives, who will delegate the DNS for the three
domains to each R53 zone. For example, TNA configure
`staging-diagram.nationalarchives.gov.uk` to pass DNS resolution to the
Jumping Rivers (JR) managed R53 zone via the four unique name server values. See
`website/README.md` for the current values.
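To check that a delegation is in place (assuming `dig` is available locally), you can query the NS records for a domain and compare them against the values recorded in `website/README.md`:

```sh
# Should list the four Route 53 name servers delegated for the staging domain
dig NS staging-diagram.nationalarchives.gov.uk +short
```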
The National Archives independently load a wildcard SSL certificate that can be used by the other AWS web services but cannot be viewed by Jumping Rivers, nor by any non-AWS services.