AWS site configuration implemented with Hashicorp Terraform.
# Server and deployment configuration using Terraform

Instead of creating dozens of AWS resources by hand, we use Terraform to create everything for us. This adds another layer of tools, but it allows us to document everything we do, to reconfigure many things at once, and to add actual comments explaining what's going on.

## Unlocking files protected with git-crypt and GPG

First, you'll need to install git-crypt and GPG, and your GPG key must have been added to this repo. Then you can run:

```sh
git crypt unlock
```

This will locally decrypt several files that are encrypted with GPG.
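Under the hood, git-crypt decides which files to encrypt based on patterns in `.gitattributes`. As an illustration (the real patterns live in this repo's `.gitattributes`), an entry marking `secrets.tf` for encryption looks like:

```
# Files matching this pattern are transparently encrypted by git-crypt
secrets.tf filter=git-crypt diff=git-crypt
```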

## Setting up AWS access

You'll need to set the following variables in your shell:

```sh
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export GITHUB_TOKEN="..."
```
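Terraform's AWS provider picks these credentials up from the environment automatically, so the provider block (the real one lives in `provider.tf`) only needs a region. A minimal sketch, with an assumed region:

```hcl
# Illustrative provider block; credentials come from the environment
# variables above, not from the config.
provider "aws" {
  region = "us-east-1" # assumption -- see provider.tf for the real region
}
```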

## Generating and applying a Terraform plan

To initialize Terraform's modules, run:

```sh
terraform get
```

To see what resources Terraform would like to create, update or destroy, run:

```sh
terraform plan -module-depth=-1 -out=plan.tfplan
```

Read this output carefully! You can destroy the site if you're not paying attention. If the output looks good, you can run:

```sh
terraform apply plan.tfplan
```

...which will update all our AWS resources as described in the plan.

This process will typically look something like:

*(Screenshot of Terraform planning and applying a DNS update)*

## Terraform modules

Many of the subdirectories of this project contain "modules", each of which describes a group of related AWS resources. A module can be instantiated with a `module` block, passing it a series of parameters.

By convention, the parameters of a module are described in a file named `$MODULE_NAME/variables.tf`, and all parameters should be documented.
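For example, a module from the `single_server_cluster` directory could be instantiated along these lines (the parameter names below are hypothetical; the real ones are documented in `single_server_cluster/variables.tf`):

```hcl
# Hypothetical module call -- the parameter names are illustrative only.
module "single_server_cluster" {
  source        = "./single_server_cluster"
  cluster_name  = "language-learners"
  instance_type = "t2.micro"
}
```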

## Testing this on your own account

To test this on your own account, delete `terraform.tfstate` and `secrets.tf`, and edit `variables.tf` to use your own domain names, etc. You'll need to set up the security credential environment variables as described above.

Then, create the following resources manually in your account:

- An RDS server running MySQL, with the databases required by your containers.
- An EBS volume named `language-learners:/data`, with the files required by your containers.

These aren't managed by Terraform because they contain persistent data that we don't want to be accidentally destroyed by sloppy refactorings or user error.
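The Terraform config can still refer to these manually created resources via data sources, which look them up without managing them. A sketch, assuming the volume's `Name` tag from the list above:

```hcl
# Look up (but do not manage) the manually created EBS volume by its Name tag.
data "aws_ebs_volume" "data" {
  filter {
    name   = "tag:Name"
    values = ["language-learners:/data"]
  }
}
```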

## Pipelines

These Terraform definitions are based around the idea of CodePipeline build and deployment pipelines. For example:

*(Screenshot of a CodePipeline pipeline with Source, Build, Approve and Deploy stages)*

Note that we do not recommend CodePipeline over alternatives like TravisCI or GoCD. CodePipeline has a number of unfortunate limitations and it's a nuisance to set up. But if you need to centralize all billing on one account, it's good enough to get by.
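In Terraform terms, each pipeline boils down to an `aws_codepipeline` resource with one block per stage. A heavily abbreviated sketch (the names, role, bucket and GitHub settings below are placeholders, not this repo's actual values):

```hcl
# Abbreviated, illustrative pipeline definition -- all values are placeholders.
resource "aws_codepipeline" "example" {
  name     = "example-pipeline"
  role_arn = aws_iam_role.pipeline.arn # assumes an IAM role defined elsewhere

  artifact_store {
    location = "example-artifact-bucket"
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source"]
      configuration = {
        Owner  = "example-org"
        Repo   = "example-repo"
        Branch = "master"
      }
    }
  }

  # The Build, Approve and Deploy stages follow the same stage/action pattern.
}
```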