felixdagnon/terraform-Iacdevops-using-aws-codepipeline

IaC DevOps on AWS: CI/CD defined through Terraform using CodePipeline, CodeCommit and CodeBuild

Introduction

In terms of functionality, this project handles the following requirements:

  • Provision Permission Sets with access keys in the AWS account that hosts the AWS IAM service;

  • Link these Permission Sets to Groups and AWS accounts;

  • CI/CD to deploy the Permission Sets automatically whenever a new set or Group/Account link is configured and pushed to the code repository;

  • The code translates into infrastructure, which is built by running Terraform commands.

Users assume Roles assigned to Groups that they are part of, which, in turn, are assigned to AWS Accounts.

Permission Sets dictate the level of access a User has.

Different teams will have different Permission Sets assigned for different Accounts.

The challenge comes from making this process repeatable with as little human interaction as possible,

while still requiring the respective manager to approve manually by email before deploying to the next environment (staging or production).

Let's focus on the CI/CD components from the diagram above.

image

AWS provides solutions for all of my needs:

  • AWS CodeCommit acts as a git server, providing repositories to store code and allowing interaction with your code through the git CLI;

  • AWS CodeBuild acts as a build environment/engine, used to execute instructions in multiple stages

to build and pack code into a usable artifact;

  • AWS CodePipeline is the CI/CD framework that links the other two services (as well as others) together through executable stages;

  • AWS S3 keeps any artifacts that result from a successful build stage, for later use and posterity.

Before delving into details, let’s first take a look at the picture.

image

We are going to focus on IaC DevOps with AWS CodePipeline to implement a VPC with WebApp and DB tiers, EC2 instances,

a Bastion host, Security Groups, an Application Load Balancer, and Auto Scaling with Launch Templates.

All these resources will be implemented with Terraform, using TF config files for the entire infrastructure to be built.

image

First, create a GitHub repository, then check the Terraform manifests related to our AWS use case into it.

image

Next, we will create the AWS CodePipeline. As part of that, we will reference the

GitHub repository we created as the source.

We will create a CodeBuild project for the dev environment in the deploy stage.

In this implementation we will not see a deploy stage using CodeDeploy or any other AWS-provided deployment tool,

because AWS does not offer a pipeline tool dedicated to deploying Terraform configurations.

For that purpose, we need to leverage AWS CodeBuild as our deploy tool.

We use CodeBuild in place of CodeDeploy to apply our Terraform configurations in AWS,

that is, to provision our infrastructure using Terraform.

We will also create a manual approval stage in the pipeline and a CodeBuild project for the staging deploy.

We demonstrate this for two environments, but it can be scaled to more environments accordingly.

As a developer, or as a Terraform configuration admin, I check all my files into the GitHub repository.

When we make a change and push it to the GitHub repository, CodePipeline triggers immediately:

it completes the source stage and then moves to the deploy stage. In the deploy stage, it creates

a dev environment in AWS with all resources.

We leverage this single set of configuration files to create multiple environments, excluding the Terraform variable files (dev.tfvars, stag.tfvars, terraform.tfvars).

Whenever we create the dev environment, all these resources are created, reachable at devdemo1.devopsincloud.com.

image

From the internet-facing side, it will create a DNS record and an Application Load Balancer.

It will create the SSL certificates through AWS Certificate Manager.

It will create the Auto Scaling groups with launch templates.

It will create the NAT gateway for outbound communication from the VPC.

It will create the related IAM roles, then the EC2 instances,

and it will also create the Simple Notification Service topic for Auto Scaling group alerts.

All these resources are created in the dev deploy stage.

Once the dev deploy stage succeeds, the pipeline moves to the manual approval stage.

Here a request is sent as an email notification to the respective manager configured in the SNS notification.

That manager needs to approve via the email, so the pipeline can move on to its next stage.

Once approved, the staging environment starts getting created. In the staging environment,

the same resources configured in the Terraform configurations are created.

It will be stagedemo1.devopsincloud.com, and all these resources are created by AWS CodePipeline.

image

The advantage of using AWS CodePipeline here is that we use only one version of the entire set of Terraform manifests

for both the dev environment and the staging environment.

To do so, we create environment-specific backend files: dev.conf references the dev-related Terraform state file,

image

and in the same way stag.conf references the staging-related terraform.tfstate file.

image

The terraform.tfstate file tracks the real resources created in the cloud, with state locking handled by the underlying DynamoDB table.

This means all the information about the resources created in the cloud using Terraform is stored inside this tfstate file.

For multiple environments, we manage each environment's state by using dev.conf and stag.conf.

In addition, the dev environment has its dev.tfvars environment-specific variables,

image

and for the staging environment, stag.tfvars will be there,

image

and terraform.tfvars will be generic.

image

We are going to make a number of changes to these TF config files to support multiple environments such as dev, staging, or production.

For that, we change the naming convention of all our resources to append local.name,

so each resource can easily be identified as belonging to a business division and environment:

BusinessDivision-EnvironmentName-ResourceName.

We are also going to create build specification files, buildspec-dev.yml and buildspec-stag.yml, the dev and staging build

specification files used to implement the CodePipeline.

buildspec-dev.yml

image

buildspec-stag.yml

image

We will implement all these changes step by step.

Step-01: terraform-manifests

  • Update terraform-manifests by creating Autoscaling-with-Launch-Templates
  • Create private-key/terraform-key.pem with your private key of the same name.

Step-02: c1-versions.tf - Terraform Backends

Step-02-01 Add backend block as below

  # Adding Backend as S3 for Remote State Storage
  backend "s3" { }  

Step-02-02: Create file named dev.conf

bucket = "terraform-on-aws-for-ec2"
key    = "iacdevops/dev/terraform.tfstate"
region = "us-east-1" 
dynamodb_table = "iacdevops-dev-tfstate" 

Step-02-03: Create file named stag.conf

bucket = "terraform-on-aws-for-ec2"
key    = "iacdevops/stag/terraform.tfstate"
region = "us-east-1" 
dynamodb_table = "iacdevops-stag-tfstate" 
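
These .conf files are partial backend configurations: at init time, Terraform merges the values into the empty backend "s3" {} block from Step-02-01. As a sketch, initializing with dev.conf is equivalent to having written the backend fully inline:

```hcl
# Equivalent fully-inline backend for the dev environment (sketch only);
# in practice the block stays empty and the values come in via
# terraform init -backend-config=dev.conf
terraform {
  backend "s3" {
    bucket         = "terraform-on-aws-for-ec2"
    key            = "iacdevops/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "iacdevops-dev-tfstate"
  }
}
```

Keeping the backend block empty is what lets the same c1-versions.tf serve both environments.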

Step-02-04: Create S3 Bucket related folders for both environments for Terraform State Storage

Go to Services -> S3 -> terraform-on-aws-for-ec2-demo1

  • Create Folder iacdevops

image

  • Create Folder iacdevops/dev

  • Create Folder iacdevops/stag

image

Step-02-05: Create DynamoDB Tables for Both Environments for Terraform State Locking

  • Create DynamoDB Table for Dev Environment

  • Table Name: iacdevops-dev-tfstate

  • Partition key (Primary Key): LockID (Type as String)

  • Table settings: Use default settings (checked)

  • Click on Create

image

Create DynamoDB Table for Staging Environment

  • Table Name: iacdevops-stag-tfstate

  • Partition key (Primary Key): LockID (Type as String)

  • Table settings: Use default settings (checked)

  • Click on Create

image
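
The console steps above can also be expressed in Terraform, as a sketch kept outside the pipeline-managed configuration (the lock tables must exist before terraform init runs; resource labels here are illustrative):

```hcl
# Terraform state-lock tables need only a string partition key named LockID
resource "aws_dynamodb_table" "dev_tfstate_lock" {
  name         = "iacdevops-dev-tfstate"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_dynamodb_table" "stag_tfstate_lock" {
  name         = "iacdevops-stag-tfstate"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```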

Step-03: Pipeline Build Out - Decisions

We have two options here.

Step-03-01: Option-1: Create separate folders per environment and have same TF Config files (c1 to c13) maintained per environment

This is more work, as we need to manage many environment-related configs:

  • Dev - C1 to C13 - Approximate 30 files

  • QA - C1 to C13 - Approximate 30 files

  • Stg - C1 to C13 - Approximate 30 files

  • Prd - C1 to C13 - Approximate 30 files

  • DR - C1 to C13 - Approximate 30 files

  • Close to 150 files in which you need to manage changes.

For critical projects that you want to isolate as above, Terraform also recommends this approach, but it is a case-by-case decision based on the

environment you have built, skill level, and organization-level standards.

Step-03-02: Option-2: Create only 1 folder and leverage same C1 to C13 files (approx 30 files) across environments.

Only 30 files to manage across Dev, QA, Staging, Production and DR environments.

  • We are going to take this option-2 and build the pipeline for Dev and Staging environments

image

Step-04: Merge vpc.auto.tfvars and ec2instance.auto.tfvars

  • Merge vpc.auto.tfvars and ec2instance.auto.tfvars into environment-specific .tfvars files, for example dev.tfvars and stag.tfvars
  • We also want to leverage the same TF config files across environments.
  • We pass the .tfvars file to the terraform apply command via the -var-file argument
terraform apply -input=false -var-file=dev.tfvars -auto-approve  

Step-04-01: dev.tfvars

# Environment
environment = "dev"
# VPC Variables
vpc_name = "myvpc"
vpc_cidr_block = "10.0.0.0/16"
vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
vpc_public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
vpc_private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
vpc_database_subnets= ["10.0.151.0/24", "10.0.152.0/24", "10.0.153.0/24"]
vpc_create_database_subnet_group = true 
vpc_create_database_subnet_route_table = true   
vpc_enable_nat_gateway = true  
vpc_single_nat_gateway = true

# EC2 Instance Variables
instance_type = "t3.micro"
instance_keypair = "terraform-key"
private_instance_count = 2
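
For context, these VPC variables feed a VPC module call in the shared config files. A minimal sketch of that wiring, assuming the community terraform-aws-modules/vpc/aws module (the argument names and naming expression are assumptions, not taken from the repository):

```hcl
# Sketch of the VPC module consuming the environment-specific .tfvars values
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "${local.name}-${var.vpc_name}"
  cidr = var.vpc_cidr_block
  azs  = var.vpc_availability_zones

  public_subnets   = var.vpc_public_subnets
  private_subnets  = var.vpc_private_subnets
  database_subnets = var.vpc_database_subnets

  create_database_subnet_group       = var.vpc_create_database_subnet_group
  create_database_subnet_route_table = var.vpc_create_database_subnet_route_table

  enable_nat_gateway = var.vpc_enable_nat_gateway
  single_nat_gateway = var.vpc_single_nat_gateway
}
```

Because every argument comes from a variable, the same module block produces the dev or staging VPC depending solely on which -var-file is passed.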

Step-04-02: stag.tfvars

# Environment
environment = "stag"
# VPC Variables
vpc_name = "myvpc"
vpc_cidr_block = "10.0.0.0/16"
vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
vpc_public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
vpc_private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
vpc_database_subnets= ["10.0.151.0/24", "10.0.152.0/24", "10.0.153.0/24"]
vpc_create_database_subnet_group = true 
vpc_create_database_subnet_route_table = true   
vpc_enable_nat_gateway = true  
vpc_single_nat_gateway = true


# EC2 Instance Variables
instance_type = "t3.micro"
instance_keypair = "terraform-key"
private_instance_count = 2
  • Remove / Delete the following two files
    • vpc.auto.tfvars
    • ec2instance.auto.tfvars

Step-05: terraform.tfvars

  • terraform.tfvars, which auto-loads for all environment creations, contains only generic variables.
# Generic Variables
aws_region = "us-east-1"
business_divsion = "hr"
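
These generic variables combine with the environment name to form local.name, which Step-07 appends to resource names. A sketch of the local values (the file name and exact expression are assumptions; note that business_divsion deliberately matches the variable name used in terraform.tfvars above):

```hcl
# Local values deriving the BusinessDivision-EnvironmentName prefix,
# e.g. "hr-dev" or "hr-stag"
locals {
  owners      = var.business_divsion
  environment = var.environment
  name        = "${var.business_divsion}-${var.environment}"
}
```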

Step-06: Provisioner "local-exec"

Step-06-01: c9-nullresource-provisioners.tf

  • Applicable in CodePipeline -> CodeBuild case.
 provisioner "local-exec" {
    command = "echo VPC created on `date` and VPC ID: ${module.vpc.vpc_id} >> creation-time-vpc-id.txt"
    working_dir = "local-exec-output-files/"
  }
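
For context, a local-exec provisioner like this lives inside a resource; in c9 it would typically be attached to a null_resource (a sketch, with the resource label and depends_on as assumptions):

```hcl
# Sketch: null_resource wrapper for the creation-time local-exec provisioner
resource "null_resource" "vpc_creation_log" {
  depends_on = [module.vpc]

  # Record the VPC ID and creation time to a local file during apply
  provisioner "local-exec" {
    command     = "echo VPC created on `date` and VPC ID: ${module.vpc.vpc_id} >> creation-time-vpc-id.txt"
    working_dir = "local-exec-output-files/"
  }
}
```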

Step-06-02: c8-elasticip.tf

  • Applicable in CodePipeline -> CodeBuild case.
  provisioner "local-exec" {
    command = "echo Destroy time prov `date` >> destroy-time-prov.txt"
    working_dir = "local-exec-output-files/"
    when = destroy
  }  

Step-07: To Support Multiple Environments

Step-07-01: c5-03-securitygroup-bastionsg.tf

# Append local.name to "public-bastion-sg"  
  name = "${local.name}-public-bastion-sg"

Step-07-02: c5-04-securitygroup-privatesg.tf

# Append local.name to "private-sg"
  name = "${local.name}-private-sg"  

Step-07-03: c5-05-securitygroup-loadbalancersg.tf

# Append local.name to "loadbalancer-sg"
  name = "${local.name}-loadbalancer-sg"  
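
For context, each of these name arguments sits inside a security group module block. A sketch for the bastion security group, assuming the community terraform-aws-modules/security-group/aws module (the surrounding arguments are assumptions):

```hcl
module "public_bastion_sg" {
  source = "terraform-aws-modules/security-group/aws"

  # Environment-aware name, e.g. "hr-dev-public-bastion-sg"
  name        = "${local.name}-public-bastion-sg"
  description = "Security group with SSH port open for everybody"
  vpc_id      = module.vpc.vpc_id

  ingress_rules       = ["ssh-tcp"]
  ingress_cidr_blocks = ["0.0.0.0/0"]
  egress_rules        = ["all-all"]
}
```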

Step-07-04: Create Variable for DNS Name to support multiple environments

Step-07-04-01: c12-route53-dnsregistration.tf

# DNS Name Input Variable
variable "dns_name" {
  description = "DNS Name to support multiple environments"
  type = string   
}

Step-07-04-02: c12-route53-dnsregistration.tf

# DNS Registration 
resource "aws_route53_record" "apps_dns" {
  zone_id = data.aws_route53_zone.mydomain.zone_id 
  name    = var.dns_name 
  type    = "A"
  alias {
    name                   = module.alb.lb_dns_name
    zone_id                = module.alb.lb_zone_id
    evaluate_target_health = true
  }  
}
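
The record above references data.aws_route53_zone.mydomain; that data source looks up the existing hosted zone used in this demo (a sketch, assuming the zone lookup lives alongside the record):

```hcl
# Look up the existing Route 53 hosted zone so records can be added to it
data "aws_route53_zone" "mydomain" {
  name = "kalyandemo.com"
}
```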

In my case, the domain names change in this step.

I create the hosted zone "kalyandemo.com" in the Route 53 console

image

Let's create the DNS name records

Step-07-04-03: dev.tfvars

# DNS Name
dns_name = "devdemo5.kalyandemo.com"

Step-07-04-04: stag.tfvars

# DNS Name
dns_name = "stagedemo5.kalyandemo.com"

Step-07-05: c11-acm-certificatemanager.tf

  subject_alternative_names = [
    #"*.kalyandemo.com"
    var.dns_name  
  ]

Step-07-06: c13-02-autoscaling-launchtemplate-resource.tf

# Append local.name to name_prefix
  name_prefix = "${local.name}-"

Step-07-07: c13-02-autoscaling-launchtemplate-resource.tf

# Append Name = local.name
  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = local.name
    }
  }    

Step-07-08: c13-03-autoscaling-resource.tf

# Append local.name to name_prefix
  name_prefix = "${local.name}-"  

Step-07-09: c13-06-autoscaling-ttsp.tf

# Append local.name to name
  name = "${local.name}-avg-cpu-policy-greater-than-xx"
  name = "${local.name}-alb-target-requests-greater-than-yy"  
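Each of these names belongs to a target tracking scaling policy. A sketch of the first one, with the ASG reference and target value as assumptions:

```hcl
resource "aws_autoscaling_policy" "avg_cpu_policy_greater_than_xx" {
  name                   = "${local.name}-avg-cpu-policy-greater-than-xx"
  policy_type            = "TargetTrackingScaling"
  autoscaling_group_name = aws_autoscaling_group.my_asg.id

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    # Scale out when average CPU across the group exceeds this value
    target_value = 50.0
  }
}
```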

Step-08: Create Secure Parameters in Parameter Store

Step-08-01: Create MY_AWS_ACCESS_KEY_ID

  • Go to Services -> Systems Manager -> Application Management -> Parameter Store -> Create Parameter
    • Name: /CodeBuild/MY_AWS_ACCESS_KEY_ID
    • Description: My AWS Access Key ID for Terraform CodePipeline Project
    • Tier: Standard
    • Type: Secure String
    • Rest all defaults
    • Value: ABCXXXXDEFXXXXGHXXX

Step-08-02: Create MY_AWS_SECRET_ACCESS_KEY

  • Go to Services -> Systems Manager -> Application Management -> Parameter Store -> Create Parameter
    • Name: /CodeBuild/MY_AWS_SECRET_ACCESS_KEY
    • Description: My AWS Secret Access Key for Terraform CodePipeline Project
    • Tier: Standard
    • Type: Secure String
    • Rest all defaults
    • Value: abcdefxjkdklsa55dsjlkdjsakj

image

Step-09: buildspec-dev.yml

  • The following environment variables are passed to the build:
  • TERRAFORM_VERSION
    • Which version of Terraform CodeBuild should use
    • As of today, 1.7.3 is the latest, so we use that
  • TF_COMMAND
    • We use apply to create resources
    • We switch it to destroy to tear resources down from the CodeBuild environment
  • AWS_ACCESS_KEY_ID: /CodeBuild/MY_AWS_ACCESS_KEY_ID
    • The AWS Access Key ID is safely stored in Parameter Store
  • AWS_SECRET_ACCESS_KEY: /CodeBuild/MY_AWS_SECRET_ACCESS_KEY
    • The AWS Secret Access Key is safely stored in Parameter Store
version: 0.2

env:
  variables:
    TERRAFORM_VERSION: "1.7.3"
    TF_COMMAND: "apply"
    #TF_COMMAND: "destroy"
  parameter-store:
    AWS_ACCESS_KEY_ID: "/CodeBuild/MY_AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "/CodeBuild/MY_AWS_SECRET_ACCESS_KEY"

phases:
  install:
    runtime-versions:
      python: 3.7
    on-failure: ABORT       
    commands:
      - tf_version=$TERRAFORM_VERSION
      - wget https://releases.hashicorp.com/terraform/"$TERRAFORM_VERSION"/terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - unzip terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - mv terraform /usr/local/bin/
  pre_build:
    on-failure: ABORT     
    commands:
      - echo terraform execution started on `date`            
  build:
    on-failure: ABORT   
    commands:
    # Project-1: AWS VPC, ASG, ALB, Route53, ACM, Security Groups and SNS 
      - cd "$CODEBUILD_SRC_DIR/terraform-manifests"
      - ls -lrt "$CODEBUILD_SRC_DIR/terraform-manifests"
      - terraform --version
      - terraform init -input=false --backend-config=dev.conf
      - terraform validate
      - terraform plan -lock=false -input=false -var-file=dev.tfvars           
      - terraform $TF_COMMAND -input=false -var-file=dev.tfvars -auto-approve  
  post_build:
    on-failure: CONTINUE   
    commands:
      - echo terraform execution completed on `date`         

Step-10: buildspec-stag.yml

version: 0.2

env:
  variables:
    TERRAFORM_VERSION: "1.7.3"
    TF_COMMAND: "apply"
    #TF_COMMAND: "destroy"
  parameter-store:
    AWS_ACCESS_KEY_ID: "/CodeBuild/MY_AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "/CodeBuild/MY_AWS_SECRET_ACCESS_KEY"

phases:
  install:
    runtime-versions:
      python: 3.7
    on-failure: ABORT       
    commands:
      - tf_version=$TERRAFORM_VERSION
      - wget https://releases.hashicorp.com/terraform/"$TERRAFORM_VERSION"/terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - unzip terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - mv terraform /usr/local/bin/
  pre_build:
    on-failure: ABORT     
    commands:
      - echo terraform execution started on `date`            
  build:
    on-failure: ABORT   
    commands:
    # Project-1: AWS VPC, ASG, ALB, Route53, ACM, Security Groups and SNS 
      - cd "$CODEBUILD_SRC_DIR/terraform-manifests"
      - ls -lrt "$CODEBUILD_SRC_DIR/terraform-manifests"
      - terraform --version
      - terraform init -input=false --backend-config=stag.conf
      - terraform validate
      - terraform plan -lock=false -input=false -var-file=stag.tfvars           
      - terraform $TF_COMMAND -input=false -var-file=stag.tfvars -auto-approve  
  post_build:
    on-failure: CONTINUE   
    commands:
      - echo terraform execution completed on `date`             

Step-11: Create Github Repository and Check-In file

Step-11-01: Create New Github Repository

  • Go to github.com and login with your credentials

  • Click on Repositories Tab

  • Click on New to create a new repository

  • Repository Name: terraform-iacdevops-with-aws-codepipeline

  • Description: Implement Terraform IAC DevOps for AWS Project with AWS CodePipeline

  • Repository Type: Private

  • Choose License: Apache License 2.0

  • Click on Create Repository

  • Click on Code and Copy Repo link

    image

I create a demo-repos folder on my local machine

Step-11-02: Clone Remote Repo and Copy all related files

image

# Change Directory
cd demo-repos

# Execute Git Clone
git clone https://github.com/felixdagnon/terraform-iacdevops-with-aws-codepipeline.git

# Copy all related files into the cloned repo, then verify Git Status
git status

# Stage and Commit the new files
git add .
git commit -m "First Commit"

# Push files to Remote Repository
git push

# Verify same on Remote Repository
https://github.com/felixdagnon/terraform-iacdevops-with-aws-codepipeline.git

image

image

Let's check GitHub. The code has been uploaded to the repository

image

Step-12: Verify whether the AWS Connector for GitHub is already installed on your GitHub account

image

Step-13: Create Github Connection from AWS Developer Tools

  • Go to Services -> CodePipeline -> Create Pipeline

  • In Developer Tools -> Click on Settings -> Connections -> Create Connection

  • Select Provider: Github

  • Connection Name: terraform-iacdevops-aws-cp-con1

  • Click on Connect to Github

image

  • GitHub Apps: Click on Install new app

  • It should redirect to the GitHub page to install the AWS Connector for GitHub

  • Only select repositories: terraform-iacdevops-with-aws-codepipeline

image

  • Click on save

    Redirect to AWS console

  • Click on Connect

    image

  • Verify Connection Status: It should be in Available state

image

image

Step-14: Create AWS CodePipeline

  • Go to Services -> CodePipeline -> Create Pipeline

Pipeline settings

  • Pipeline Name: tf-iacdevops-aws-cp1

  • Service role: New Service Role

  • rest all defaults

  • Artifact store: Default Location

  • Encryption Key: Default AWS Managed Key

  • Click Next

image

image

Source Stage

  • Source Provider: Github (Version 2)

  • Connection: terraform-iacdevops-aws-cp-con1

  • Repository name: terraform-iacdevops-with-aws-codepipeline

  • Branch name: main

  • Change detection options: leave to defaults as checked

  • Output artifact format: leave to defaults as CodePipeline default

image

image

Add Build Stage

  • Build Provider: AWS CodeBuild

  • Region: N.Virginia

  • Project Name: Click on Create Project

    image

    • Project Name: codebuild-tf-iacdevops-aws-cp1

    • Description: CodeBuild Project for Dev Stage of IAC DevOps Terraform Demo

image

  • Environment image: Managed Image

  • Operating System: Amazon Linux 2

  • Runtimes: Standard

    image

  • Image: latest available today (aws/codebuild/amazonlinux2-x86_64-standard:3.0)

  • Environment Type: Linux

  • Service Role: New (leave to defaults including Role Name)

    image

  • Build specifications: use a buildspec file

  • Buildspec name - optional: buildspec-dev.yml (Ensure that this file is present in root folder of your github repository)

    image

  • Rest all leave to defaults

  • Click on Continue to CodePipeline

image

  • Project Name: This value should be auto-populated with codebuild-tf-iacdevops-aws-cp1

  • Build Type: Single Build

  • Click Next

    image

Add Deploy Stage

  • Click on Skip Deploy Stage

image

Review Stage

  • Click on Create Pipeline

image

image

image

Step-15: Verify the Pipeline created

  • Verify Source Stage: Should pass

image

  • Verify Build Stage: should fail with error

image

  • Verify Build Stage logs by clicking on details in pipeline screen
[Container] 2021/05/11 06:24:06 Waiting for agent ping
[Container] 2021/05/11 06:24:09 Waiting for DOWNLOAD_SOURCE
[Container] 2021/05/11 06:24:09 Phase is DOWNLOAD_SOURCE
[Container] 2021/05/11 06:24:09 CODEBUILD_SRC_DIR=/codebuild/output/src851708532/src
[Container] 2021/05/11 06:24:09 YAML location is /codebuild/output/src851708532/src/buildspec-dev.yml
[Container] 2021/05/11 06:24:09 Processing environment variables
[Container] 2021/05/11 06:24:09 Decrypting parameter store environment variables
[Container] 2021/05/11 06:24:09 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2021/05/11 06:24:09 Phase context status code: Decrypted Variables Error Message: AccessDeniedException: User: arn:aws:sts::180789647333:assumed-role/codebuild-codebuild-tf-iacdevops-aws-cp1-service-role/AWSCodeBuild-97595edc-1db1-4070-97a0-71fa862f0993 is not authorized to perform: ssm:GetParameters on resource: arn:aws:ssm:us-east-1:180789647333:parameter/CodeBuild/MY_AWS_ACCESS_KEY_ID

image

Step-16: Fix ssm:GetParameters IAM Role issues

Step-16-01: Get IAM Service Role used by CodeBuild Project

  • Get the IAM Service Role name CodeBuild Project is using

  • Go to CodeBuild -> codebuild-tf-iacdevops-aws-cp1 -> Edit -> Environment

  • Make a note of Service Role ARN

Here "codebuild-codebuild-tf-iacdevops-aws-cp1-service-role"

# CodeBuild Service Role ARN 
arn:aws:iam::180789647333:role/service-role/codebuild-codebuild-tf-iacdevops-aws-cp1-service-role

Step-16-02: Create IAM Policy with Systems Manager Get Parameter Read Permission

  • Go to Services -> IAM -> Policies -> Create Policy

  • Service: Systems Manager

  • Actions: Get Parameters (Under Read)

  • Resources: All

  • Click Next Tags

  • Click Next Review

  • Policy name: systems-manger-get-parameter-access

  • Policy Description: Read Parameters from Parameter Store in AWS Systems Manager Service

  • Click on Create Policy

image
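
The console steps above amount to an identity policy allowing ssm:GetParameters plus a role attachment. Expressed in Terraform as a sketch (run outside the pipeline, since the pipeline's own role is what is being fixed):

```hcl
# Policy granting CodeBuild read access to Parameter Store
resource "aws_iam_policy" "ssm_get_parameters" {
  name        = "systems-manger-get-parameter-access"
  description = "Read Parameters from Parameter Store in AWS Systems Manager Service"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ssm:GetParameters"]
      Resource = "*"
    }]
  })
}

# Attach it to the CodeBuild service role noted in Step-16-01
resource "aws_iam_role_policy_attachment" "codebuild_ssm" {
  role       = "codebuild-codebuild-tf-iacdevops-aws-cp1-service-role"
  policy_arn = aws_iam_policy.ssm_get_parameters.arn
}
```

For tighter scoping, the Resource could be restricted to the two /CodeBuild/* parameter ARNs instead of "*".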

Step-16-03: Associate this Policy to IAM Role

image

  • Go to Services -> IAM -> Roles -> Search for codebuild-codebuild-tf-iacdevops-aws-cp1-service-role

  • Attach the policy named systems-manger-get-parameter-access

image

Step-17: Re-run the CodePipeline

  • Go to Services -> CodePipeline -> tf-iacdevops-aws-cp1

  • Click on Release Change

image

  • Verify Source Stage:

    • Should pass
  • Verify Build Stage:

    • Verify Build Stage logs by clicking on details in pipeline screen

image

  • Verify Cloudwatch -> Log Groups logs too (Logs saved in CloudWatch for additional reference)

image

Let's check the log events. They show the build and post_build phases succeeded

image

Step-18: Verify Resources

Let's verify the resources

The VPC is created with all its resources inside

image

CodeBuild created the underlying resources for the VPC. Let's verify the CodeBuild log

image

All the resources have been provisioned

Download phase complete

image

Install and pre_build phases completed

image

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically use this backend.

image

"terraform.tfstate" is created in S3

image

Terraform has been successfully initialized.

Running terraform validate, terraform plan, and terraform apply

image

terraform plan completed

image

terraform apply completed

image

All resources are created in Dev

image

  1. Confirm SNS Subscription in your email

image

Subscription confirmed

image

SNS topic confirmed

image

  2. Verify EC2 Instances

"dev-BastionHost" and the 2 "hr-dev" instances are created and running

image

  3. Verify Launch Templates (High Level)

The launch template named "hr-dev-2024022709342795200000000b" was created because we set name_prefix to local.name plus a hyphen

image

  4. Verify Autoscaling Group (High Level)

The Auto Scaling group named "hr-dev-2024022709342795200000000b" was created because we set name_prefix to local.name plus a hyphen

image

Let's verify the target tracking policy

image

  5. Verify Load Balancer

The load balancer named "hr-dev" is created with its listeners

image

Let's check the certificate that was created

image

  6. Verify Load Balancer Target Group - Health Checks

Target groups are healthy because the instances running in the availability zones are healthy.

image

  7. Access and Test
# Access and Test
https://devdemo5.kalyandemo.com
https://devdemo5.kalyandemo.com/app1/index.html
https://devdemo5.kalyandemo.com/app1/metadata.html

Let's first verify Route 53. The record name "devdemo5.kalyandemo.com" is created

image

Copy the link "devdemo5.kalyandemo.com" and paste the URL into a browser

image

We obtain a secure HTTPS connection

image

Click through to view the complete certificate information

image

The certificate is valid

image

The metadata information is complete. On refresh, the page alternates between the private IPs of the two instances; the load balancer is working.

The first availability zone

image

The second availability zone

image

Step-19: Add Approval Stage before deploying to staging environment

  • Go to Services -> AWS CodePipeline -> tf-iacdevops-aws-cp1 -> Edit

image

Add Stage

  • Name: Email-Approval

Add Action Group

image

  • Action Name: Email-Approval

image

  • Action Group

    image

  • Action Provider: Manual Approval

  • SNS Topic: Select SNS Topic from drop down

  • Comments: Approve to deploy to staging environment

image

Step-20: Add Staging Environment Deploy Stage

  • Go to Services -> AWS CodePipeline -> tf-iacdevops-aws-cp1 -> Edit

Add Stage

image

  • Name: Stage-Deploy

image

Add Action Group

image

  • Action Name: Stage-Deploy

  • Region: US East (N.Virginia)

  • Action Provider: AWS CodeBuild

  • Input Artifacts: Source Artifact

image

  • Project Name: Click on Create Project

    • Project Name: stage-deploy-IACDEVOPS-CB

    • Description: CodeBuild Project for Staging Environment of IAC DevOps Terraform Demo

image

  • Environment image: Managed Image

  • Operating System: Amazon Linux 2

  • Runtimes: Standard

  • Image: latest available today (aws/codebuild/amazonlinux2-x86_64-standard:3.0)

  • Environment Type: Linux

image

  • Service Role: New (leave to defaults including Role Name)

  • Build specifications: use a buildspec file

  • Buildspec name - optional: buildspec-stag.yml (Ensure that this file is present in root folder of your github repository)

image

  • Rest all leave to defaults

image

  • Click on Continue to CodePipeline

  • Project Name: This value should be auto-populated with stage-deploy-IACDEVOPS-CB

  • Build Type: Single Build

  • Click on Done

image

  • Review Edit Action

image

  • Click on Save

image

  • We have now added the "Email-Approval" and "Stage-Deploy" stages

image

Step-21: Update the IAM Role

Let's search this role "codebuild-stage-deploy-IACDEVOPS-CB-service-role" in IAM role service

image

  • Update the IAM Role created as part of the stage-deploy-IACDEVOPS-CB CodeBuild project by attaching the policy systems-manger-get-parameter-access

image

Step-22: Run the Pipeline

  • Go to Services -> AWS CodePipeline -> tf-iacdevops-aws-cp1

  • Click on Release Change

  • Verify Source Stage

  • Verify Build Stage (Dev Environment - Dev Deploy phase)

  • Verify Manual Approval Stage - Approve the change

image

Let's go to the email to approve

image

Let's check the email

image

Review Approval Stage of pipeline

image

Review

image

Approval Stage of pipeline succeeded

image

  • Verify Stage Deploy Stage

Let's check log event.

image

Let's verify the resources

The VPC is created with all its resources inside

image

CodeBuild created the underlying resources for the VPC.

Let's verify the CodeBuild log

It shows the build and post_build phases succeeded

image

  • Verify build logs

All the resources have been provisioned

Download phase complete

image

Install and pre_build phases completed

image

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically use this backend.

image

"terraform.tfstate" is created in S3

image

Terraform has been successfully initialized.

Running terraform validate, terraform plan, and terraform apply

image

terraform plan completed

image

terraform apply completed

image

All resources are created in the staging environment

image

Pipeline overview

image

Source and build phases succeeded

image

Email-Approval and Stage-Deploy phases succeeded

image

Step-23: Verify Staging Environment

  1. Confirm SNS Subscription in your email

Notification by email

image

Confirmation subscription

image

Let's verify SNS Topic

image

  2. Verify EC2 Instances

"stag-BastionHost" and the 2 "hr-stag" instances are created and running

image

  3. Verify Launch Templates (High Level)

The launch template named "hr-stag-2024022810181677970000000c" was created because we set name_prefix to local.name plus a hyphen

image

  4. Verify Autoscaling Group (High Level)

The Auto Scaling group named "hr-stag-2024022810181768120000000e" was created because we set name_prefix to local.name plus a hyphen

image

Let's verify the target tracking policy

image

  5. Verify Load Balancer

The load balancer named "hr-stag" is created with its listeners

image

Let's check the certificate that was created

image

  6. Verify Load Balancer Target Group - Health Checks

Target groups are healthy because the instances running in the availability zones are healthy.

image

  7. Access and Test
# Access and Test
https://stagedemo5.kalyandemo.com
https://stagedemo5.kalyandemo.com/app1/index.html
https://stagedemo5.kalyandemo.com/app1/metadata.html

Let's first verify Route 53. The record name "stagedemo5.kalyandemo.com" is created

image

Copy the link "stagedemo5.kalyandemo.com" and paste the URL into a browser

image

We obtain a secure HTTPS connection

image

Click through to view the complete certificate information

image

The certificate is valid

image

The metadata information is complete. On refresh, the page alternates between the private IPs of the two instances; the load balancer is working.

The first availability zone

image

The second availability zone

image

Step-24: Make a change and test the entire pipeline

Step-24-01: c13-03-autoscaling-resource.tf

  • Increase minimum EC2 instances from 2 to 4
# Before
  desired_capacity = 2
  max_size = 10
  min_size = 2
# After
  desired_capacity = 4
  max_size = 10
  min_size = 4

image

Step-24-02: Commit Changes via Git Repo

# Verify Changes
git status
# Commit Changes to Local Repository
git add .
git commit -am "ASG Min Size from 2 to 4"

# Push changes to Remote Repository
git push

image

Step-24-03: Review Build Logs

  • Go to Services -> CodePipeline -> tf-iacdevops-aws-cp1

Source and build phases succeeded

image

  • Verify Dev Deploy Logs

The Auto Scaling group is modified in dev and the capacity increases

image

  • Approve at Manual Approval stage

image

Email-Approval and Stage-Deploy phases succeeded

image

  • Verify Stage Deploy Logs

The Auto Scaling group is modified in the staging environment: 2 ---> 4

image

Deployment completed successfully

image

Step-24-04: Verify EC2 Instances

  • Go to Services -> EC2 Instances

  • Newly created instances should be visible.

  • hr-dev: 4 EC2 Instances

Let's verify the instances

The number of instances increased to 4.

image

Let's check the target groups

image

  • hr-stag: 4 EC2 Instances

The number of instances increased to 4.

image

Let's check the target groups

image

Let's verify the Auto Scaling group

image

Let's verify the Auto Scaling group's load balancer

image

Step-25: Destroy Resources

Step-25-01: Update buildspec-dev.yml

# Before
    TF_COMMAND: "apply"
    #TF_COMMAND: "destroy"
# After
    #TF_COMMAND: "apply"
    TF_COMMAND: "destroy"    

image

Step-25-02: Update buildspec-stag.yml

# Before
    TF_COMMAND: "apply"
    #TF_COMMAND: "destroy"
# After
    #TF_COMMAND: "apply"
    TF_COMMAND: "destroy"    

image

Step-25-03: Commit Changes via Git Repo

# Verify Changes
git status

# Commit Changes to Local Repository
git add .
git commit -am "Destroy Resources"

# Push changes to Remote Repository
git push

image

Step-25-04: Review Build Logs

  • Go to Services -> CodePipeline -> tf-iacdevops-aws-cp1

  • Verify Dev Deploy Logs

    All resources are being destroyed

image

Let's verify the pipeline

image

  • Approve at Manual Approval stage

image

Edit pipeline in approval stage

image

Change SNS Topic

image

Release pipeline

Review the Email-Approval stage and approve

image

All resources in Stage-Deploy are destroyed

image

  • Verify Stage Deploy Logs

image

  • Pipeline overview

image

All phases completed

image

  • Let's check the resources

All EC2 instances are terminated

image

Step-26: Change Everything back to original Demo State

Step-26-01: c13-03-autoscaling-resource.tf

  • Change them back to original state
# Before
  desired_capacity = 4
  max_size = 10
  min_size = 4
# After
  desired_capacity = 2
  max_size = 10
  min_size = 2

Step-26-02: buildspec-dev.yml and buildspec-stag.yml

  • Change them back to original state
# Before
    #TF_COMMAND: "apply"
    TF_COMMAND: "destroy"   
# After
    TF_COMMAND: "apply"
    #TF_COMMAND: "destroy"     

Step-26-03: Commit Changes via Git Repo

# Verify Changes
git status

# Commit Changes to Local Repository
git add .
git commit -am "Fixed all the changes back to demo state"

# Push changes to Remote Repository
git push
