Avalon Turnkey - Terraform Scripts for Running Avalon on AWS

Turnkey solution for Avalon on AWS, using Terraform

Goals

The goal of this solution is to provide a simple, cost-effective way to run Avalon in the cloud while remaining resilient, performant, and easy to manage. It aims to serve collections with low to medium traffic.

Architecture diagram

Getting started

Prerequisites

  1. Download and install Terraform 0.12+. The scripts have been upgraded to HCL 2 and are therefore incompatible with earlier versions of Terraform.

  2. Clone this repo

  3. Create or import an EC2 key-pair for your region.

  4. Create an S3 bucket to hold the Terraform state. This is useful when executing Terraform on multiple machines (or working as a team) because it allows state to remain in sync.

  5. Copy dev.tfbackend.example to dev.tfbackend and fill in the previously created bucket name.

    bucket = "my-terraform-state"
    key    = "state.tfstate"
    region = "us-east-1"
    
  6. Create an IAM user that Fedora will use to sign its S3 requests.

  7. Create a public hosted zone in Route53; Terraform will automatically manage DNS entries in this zone. A registered domain name is needed to pair with the Route53 hosted zone. You can use Route53 to register a new domain or use Route53 to manage an existing domain.

  8. Copy terraform.tfvars.example to terraform.tfvars and fill in the relevant information:

    environment         = "dev"
    hosted_zone_name    = "mydomain.org"
    ec2_keyname         = "my-ec2-key"
    ec2_private_keyfile = "/local/path/my-ec2-key.pem"
    stack_name          = "mystack"
    fcrepo_binary_bucket_username = "iam_user"
    fcrepo_binary_bucket_access_key = "***********"
    fcrepo_binary_bucket_secret_key = "***********"
    tags = {
      Creator    = "me"
      AnotherTag = "Whatever value I want!"
    }
    
    • Note: You can keep more than one variable file and pass its name on the command line to manage more than one stack.
  9. Execute terraform init -reconfigure -backend-config=dev.tfbackend.
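The AWS-side prerequisites above (steps 3, 4, and 9) can also be done from the AWS CLI. The following is a sketch only; the bucket name, key name, and key file path are placeholders you must replace with your own:

```shell
# Step 4: create the S3 bucket that will hold the Terraform state
# (us-east-1 needs no LocationConstraint; other regions require one).
aws s3api create-bucket --bucket my-terraform-state --region us-east-1

# Step 3: import an existing public key as an EC2 key pair in your region.
aws ec2 import-key-pair --key-name my-ec2-key \
    --public-key-material fileb://~/.ssh/my-ec2-key.pub

# Step 9: initialize Terraform against the backend config, then target a
# second stack by passing an alternate variable file on the command line.
terraform init -reconfigure -backend-config=dev.tfbackend
terraform plan -var-file=staging.tfvars
```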

Bringing up the stack

To see the changes Terraform will make:

terraform plan

To actually make those changes:

terraform apply

Be patient: the script attempts to register SSL certificates for your domains, and AWS's certificate validation process can take from 5 to 30 minutes.
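If you want to apply exactly the set of changes you reviewed (a common Terraform pattern, not something these scripts require), you can save the plan to a file and apply that file:

```shell
# Save the reviewed plan, then apply exactly that plan and nothing else.
terraform plan -out=tfplan
terraform apply tfplan
```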

Extra settings

Email

In order for Avalon to send email through AWS, you need to add these variables to the terraform.tfvars file and make sure the addresses are verified in Simple Email Service:

email_comments      = "comments@mydomain.org"
email_notification  = "notification@mydomain.org"
email_support       = "support@mydomain.org"
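Verification can be done in the SES console or with the AWS CLI; the address below is a placeholder for one of the addresses above:

```shell
# Sends a verification mail to the address; SES marks the identity
# verified once the recipient clicks the link in that mail.
aws ses verify-email-identity --email-address comments@mydomain.org

# Check which identities SES currently knows about.
aws ses list-identities --identity-type EmailAddress
```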

Authentication

Turnkey comes bundled with Persona by default but can be configured to work with other authentication strategies by using the appropriate OmniAuth gems. Refer to this doc for integration instructions.

Maintenance

Update the stack

You can proceed with terraform plan and terraform apply as often as you want to see and apply changes to the stack. Changes you make to the *.tf files will automatically be reflected in the resources under Terraform's control.

Destroy the stack

Special care must be taken if you want to retain your data when destroying the stack. If that is not a concern, you can simply run

terraform destroy
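One way to retain a resource (for example, an S3 bucket holding master files) is to remove it from Terraform's state before destroying: Terraform then forgets the resource instead of deleting it. The resource address below is hypothetical; run `terraform state list` to find the real addresses in your stack:

```shell
# List every resource Terraform manages in this stack.
terraform state list

# Detach a resource to keep from state (hypothetical address),
# then destroy everything that remains under Terraform's control.
terraform state rm aws_s3_bucket.masterfiles
terraform destroy
```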

Update the containers

Since Avalon, Fedora, Solr and Nginx are running inside Docker containers managed by docker-compose, you can SSH to the EC2 box and run docker-compose commands as usual.

docker-compose pull
docker-compose up -d
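To update a single container without restarting the others, you can target one service; the service name below is a guess, so check docker-compose.yml on the EC2 box for the real names:

```shell
# Pull the newest image for one service and recreate only that container;
# --no-deps avoids restarting the services it depends on.
docker-compose pull avalon
docker-compose up -d --no-deps avalon
```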

Performance & Cost

The EC2 instances are sized to minimize cost while allowing occasional bursts (mostly by using t3). However, if your system constantly runs at 30%+ CPU, it might be cheaper and more performant to switch to larger t2 or m5 instances.

Cost can be further reduced by using reserved instances, that is, committing to EC2 usage for months or years.

Out of the box, the system can service up to 100 concurrent streaming users without serious performance degradation. More performance can be achieved by scaling up using a larger EC2 instance.
