terraform-project

  1. S3 Bucket with SSE Enabled.
    Module aws-s3
  2. RDS Security Group
    Module aws-rds
  3. MySQL RDS in a Private subnet
    Module aws-rds | Input from static private_subnet_ids located in Terragrunt env.hcl
  4. EC2 Security Group
    Module aws-ec2
  5. EC2 in a Private Subnet
    Module aws-ec2 | Input from static private_subnet_ids located in Terragrunt env.hcl
  6. EC2 should be able to talk to MySQL
    Module aws-rds | Input from aws-ec2.private_ip for RDS Security Group Ingress Traffic (see the Terragrunt sketch after this list)
  7. ALB that uses ACM for TLS certs.
    Module aws-alb
  8. EC2 should only allow traffic from ACM. (Should this be ALB rather than ACM?)
    Module aws-ec2 | Input from aws-alb.security_group.id (this security group ID will be attached to both the EC2 instance and the ALB and will only allow traffic to itself)
  9. ELB should only allow access from your IP.
    Module aws-alb | Input from static developer_ip located in Terragrunt terragrunt.hcl
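
To make the module wiring concrete, below is a minimal sketch of what the aws-rds unit's terragrunt.hcl could look like for requirement 6. The variable names (private_subnet_ids, ec2_private_ip) and relative paths are illustrative assumptions, not necessarily the exact names used in this repository.

```hcl
# terragrunt/<account>/<region>/development/aws-rds/terragrunt.hcl (illustrative)

# Read environment-wide static values (e.g. private_subnet_ids) from env.hcl.
locals {
  env = read_terragrunt_config(find_in_parent_folders("env.hcl"))
}

terraform {
  source = "../../../../../terraform/modules/aws-rds"
}

# Requirement 6: the EC2 instance's private IP feeds the RDS security group
# ingress rule, so only that host can reach MySQL on port 3306.
dependency "ec2" {
  config_path = "../aws-ec2"
}

inputs = {
  private_subnet_ids = local.env.locals.private_subnet_ids
  ec2_private_ip     = dependency.ec2.outputs.private_ip
}
```

Requirement 8 follows the same pattern, with the aws-ec2 unit declaring a dependency on aws-alb and consuming its security group ID output.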

Goals

  1. Folder structure and skeleton. Perfection is not necessary.
    See terraform and terragrunt directories and more detailed information below.
    Note: The generic-random module generates a random deployment_id for the Terraform resources and a random_password for the database (see the sketch below this item).
    The deployment_id keeps Terraform resource names unique, allowing multiple deployments to coexist in the development environment.
    This module is intended to be generic, as its outputs (deployment_id and random_password) could also be consumed by modules for other platforms (such as Azure, GCP, VMware, etc.).
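
A module like that can stay very small. The sketch below is an assumption of what it could contain; the real module's resource and output names may differ.

```hcl
# terraform/modules/generic-random (illustrative sketch)

terraform {
  required_providers {
    random = {
      source = "hashicorp/random"
    }
  }
}

# Short hex suffix appended to resource names so several deployments
# can coexist in the same development account.
resource "random_id" "deployment" {
  byte_length = 4
}

# Database master password; generated at apply time, never committed.
resource "random_password" "database" {
  length  = 16
  special = true
}

output "deployment_id" {
  value = random_id.deployment.hex
}

output "random_password" {
  value     = random_password.database.result
  sensitive = true
}
```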

  2. DRY code and achieve maximum reusability.
    See the terraform and terragrunt directories. Terragrunt (a thin wrapper around Terraform) is used to abstract the Terraform modules and provide environment-specific configuration for each targeted environment (development, test and production).
    An example of this can be found in the /terraform/terragrunt directory.
    The environment folder structure is AWS Account -> AWS Region -> Specific Environment Configuration (see the layout sketch below).
    Terragrunt orchestrates environment-specific inputs and dependencies between the Terraform modules (located in the /terraform/modules directory).
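
As an illustration, the hierarchy and the static environment values could look roughly like this (the account, region and subnet IDs are placeholders, not values from this repository):

```hcl
# Illustrative layout:
#
#   terragrunt/
#     terragrunt.hcl            # root config: remote state, providers, developer_ip
#     <aws-account>/
#       <aws-region>/
#         development/
#           env.hcl             # environment-wide values such as private_subnet_ids
#           aws-s3/terragrunt.hcl
#           aws-rds/terragrunt.hcl
#           aws-ec2/terragrunt.hcl
#           aws-alb/terragrunt.hcl

# development/env.hcl
locals {
  environment        = "development"
  private_subnet_ids = ["subnet-0aaa1111", "subnet-0bbb2222"]
}
```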

  3. List of tools to validate terraform code

  • Visual Studio Code HashiCorp Terraform Extension
    This plugin provides syntax highlighting and autocompletion for the Visual Studio Code IDE. This is very helpful for local development activities.
  • Terragrunt Mock Outputs
    Terragrunt can mock the outputs of a dependency when executing "terraform plan". This is a great way to validate input dependencies between Terraform modules (see the sketch after this list).
  • TFLint (a pluggable Terraform linter)
    This linter alerts on possible errors (such as invalid types for cloud providers like AWS/Azure/GCP), warns about deprecated syntax or unused declarations, and enforces best practices and naming conventions.
  • Python/Bash/Other Scripting
    Using a coding/scripting language, we could write tests that validate the end state of the Terraform deployment, for example checking that the appropriate ports are reachable on the RDS instance or that network traffic from the ALB reaches the EC2 instance.
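
For example, a dependency block can carry mock outputs so a plan succeeds even before the upstream module has been applied; the mocked value below is arbitrary and the output name is an assumption:

```hcl
# In aws-rds/terragrunt.hcl: allow "terragrunt plan" before aws-ec2 exists.
dependency "ec2" {
  config_path = "../aws-ec2"

  mock_outputs = {
    private_ip = "10.0.1.10"
  }
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
}
```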
  4. Dev/Test and Prod deployment folder structure.
    See the terragrunt directory.

  5. Build a CI pipeline that will allow you to target an AWS account
    Before building the CI pipeline we need to understand the security and compliance posture of the project and determine the targeted environment(s).
    We also need a deployment strategy that promotes the project through the various environments (development, test and production).
    Using Git branching, we can design a deployment strategy that reflects the maturity of each release.

    For Example:

    • Feature1 (environment target: development)
    • Feature2 (environment target: development)
    • Feature3 (environment target: development)
    • Release3 (environment target: test)
    • Master (environment target: production)


with promotion flowing from Feature1 (development) -> Release3 (test) -> Master (production).
Depending on the scope of the branch, the target environment changes.

After this deployment strategy has been developed, we can create logic for the CI pipeline to retrieve the appropriate authentication method for manipulating the targeted environment.

Assuming the security and compliance posture of this project is minimal, I would recommend using GitHub Actions as the CI pipeline orchestration tool/service.
GitHub Actions has a large collection of workflow steps that are supported and maintained by both the open source community and commercial vendors.
For example, HashiCorp maintains several GitHub Actions for working with Terraform.
GitHub also offers a native way to authenticate with Amazon Web Services using OpenID Connect (a Terraform sketch of the AWS side follows below). This is very beneficial to our use case: the targeted environment is AWS, and we wouldn't have to store AWS credentials as long-lived GitHub secrets.
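
As a rough sketch, the AWS side of that OIDC trust could be defined in Terraform as shown below. The repository filter and role name are placeholders, the role still needs a permissions policy attached, and the provider thumbprint should be verified against GitHub's current certificate chain before use.

```hcl
# OIDC identity provider for GitHub Actions.
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"] # verify before use
}

# Trust policy: only workflows from this repository may assume the role.
data "aws_iam_policy_document" "github_oidc_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:jrwhite17/terraform-project:*"]
    }
  }
}

resource "aws_iam_role" "github_actions_deploy" {
  name               = "github-actions-terraform-deploy"
  assume_role_policy = data.aws_iam_policy_document.github_oidc_assume.json
}
```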

We also need to determine how long our development deployments will live. Let's assume that we have automated post-deployment validation and acceptance testing. In that case, we terminate the Terraform resources once validation and acceptance testing have been executed.
Of course, we could always leave the Terraform resources running for manual validation and acceptance testing.
Note: leaving the Terraform resources running will increase cloud expenses.

The basic workflow of the pipeline:

  • Build - Lint the Terraform code and execute a Terragrunt/Terraform plan
  • Deploy - Execute a Terragrunt/Terraform apply
  • Test - Execute validation and acceptance tests
  • Clean Up - Execute a Terragrunt/Terraform destroy

If we have enough trust in our validation and acceptance tests, we could automate the project's promotion from:
feature -> release -> master branches.

If there are security concerns about using GitHub Actions as our pipeline orchestration tool, we could look at native AWS services to orchestrate the CI pipeline.
AWS offers:

  • AWS CodeCommit - Source code repository
  • AWS CodeBuild - Compiles code and executes tests
  • AWS CodeDeploy - Automates deployment of artifacts
  • AWS CodePipeline - Automates release pipelines

These native AWS developer services provide seamless integration with the AWS collection of cloud services.
