In terms of functionality, my piece handles the following requirements:
- Provision Permission Sets with access keys in the AWS account that hosts the AWS IAM service;
- Link these Permission Sets to Groups and AWS accounts;
- CI/CD to deploy the Permission Sets automatically whenever a new set or Group/Account link is configured and pushed to the code repository;
- The code translates into infrastructure, which is built by running Terraform commands.
Users assume Roles assigned to Groups that they are part of, which, in turn, are assigned to AWS Accounts.
Permission Sets dictate the level of access a User has.
Different teams will have different Permission Sets assigned for different Accounts.
The challenge is to make this process repeatable with as little human interaction as possible,
with the respective manager approving manually by email before deployment to the next environment (staging or production).
Let's focus on the CI/CD components from the diagram above.
AWS provides solutions for all of my needs:
- AWS CodeCommit acts as a git server, providing repositories to store code and allowing interaction with your code through the git CLI;
- AWS CodeBuild acts as a build environment/engine, used to execute instructions in multiple stages to build and package code into a usable artifact;
- AWS CodePipeline is the CI/CD framework that links the other two services (as well as others) together through executable stages;
- AWS S3 keeps any artifacts that result from a successful build stage, for later use and posterity.
Before delving into details, let's first take a look at the picture.
We are going to focus on IaC DevOps with AWS CodePipeline to implement a VPC with WebApp and DB tiers, EC2 instances,
a bastion host and security groups, an Application Load Balancer, and Auto Scaling with launch templates.
All these resources will be implemented with Terraform, using TF config files for all the infrastructure we intend to build.
First, create a GitHub repository, then check the Terraform manifests related to our AWS use case into it.
The next step is to create the AWS CodePipeline. As part of that, when creating the CodePipeline, we will reference
the GitHub repository we created as the source.
We will create a CodeBuild project for the dev environment in the deploy stage.
In this implementation, we will not see a deploy stage backed by CodeDeploy or any other AWS deployment tool,
because AWS doesn't provide a pipeline tool for deploying Terraform configurations.
For that purpose, we leverage AWS CodeBuild as our deploy tool: CodeBuild stands in for CodeDeploy to apply our
Terraform configurations in AWS, that is, to provision our infrastructure with Terraform.
We will also create a manual approval stage in the pipeline and a CodeBuild project for the staging deploy.
We demonstrate two environments here, but this scales to multiple environments accordingly.
As a developer, or as a Terraform configuration admin, I check all my files into the GitHub repository.
When we make a change to the code and push it to the GitHub repository, CodePipeline triggers immediately:
it completes the source stage and then moves to the deploy stage. In the deploy stage, it creates a dev environment
in AWS with all resources.
We leverage a single set of configuration files to create multiple environments; only the Terraform variable files
(dev.tfvars, stag.tfvars, terraform.tfvars) differ per environment.
So whenever we create the dev environment, all these resources are created, and it is reachable at devdemo1.devopsincloud.com.
On the internet-facing side, it creates a DNS record, an Application Load Balancer,
and a Certificate Manager entry for the SSL certificates.
It creates the Auto Scaling groups with launch templates,
the NAT gateway for outbound VPC communication,
the related IAM roles, the EC2 instances,
and the Simple Notification Service topic for Auto Scaling group alerts.
All these resources are created in the dev deploy stage.
Once this succeeds, the pipeline moves to the manual approval stage.
Here a request is sent as an email notification to the respective manager configured in the SNS notification.
That manager needs to approve via the email so the pipeline can move on to its next stage.
Once approved, the staging environment starts getting created. In the staging environment, too,
whatever resources we have configured in the Terraform configurations are created,
this time at stagedemo1.devopsincloud.com, all through AWS CodePipeline.
The advantage of using AWS CodePipeline here is that we use a single version of the entire Terraform manifest
template for both the dev and the staging environment.
To do so, dev.conf will reference the dev-related Terraform state file,
and in the same way stag.conf will reference the staging terraform.tfstate file.
The terraform.tfstate file, stored in S3 with a DynamoDB table for state locking, tracks the real resources created in the cloud.
In other words, all the information about resources created in the cloud using Terraform is stored inside this tfstate file.
For multiple environments, we manage each environment's state using dev.conf and stag.conf.
In addition, the dev environment gets its variables from dev.tfvars,
the staging environment from stag.tfvars,
and terraform.tfvars holds the generic values.
We are going to make a number of changes to these TF config files to support multiple environments such as dev, staging, or production.
For that, we change the naming convention of all our resources to append local.name,
so that each resource is easily identified by business division, environment name, and resource
name (BusinessDivision-EnvironmentName-ResourceName), as shown in the sketch below.
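To illustrate, here is a minimal sketch of how local.name can be composed from the input variables; the exact locals block is not shown in this walkthrough, so treat this as an assumption that matches the variable names used later (business_divsion in terraform.tfvars, environment in dev.tfvars/stag.tfvars):

```hcl
# Assumed sketch: build the common naming prefix from input variables.
# business_divsion comes from terraform.tfvars, environment from dev/stag.tfvars.
locals {
  name = "${var.business_divsion}-${var.environment}"
}
```

With this in place, a security group named public-bastion-sg becomes hr-dev-public-bastion-sg in the dev environment and hr-stag-public-bastion-sg in staging.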
We are also going to create build specification files, the dev and staging build
specifications used to implement the CodePipeline:
- buildspec-dev.yml
- buildspec-stag.yml
We will implement all these changes step by step.
- Update terraform-manifests by creating Autoscaling-with-Launch-Templates
- Create private-key\terraform-key.pem with your private key of the same name
In the Terraform settings block we declare a partial S3 backend, and complete it per environment at init time with terraform init -backend-config=dev.conf (or stag.conf):

```hcl
# Adding Backend as S3 for Remote State Storage
backend "s3" { }
```

dev.conf:

```hcl
bucket         = "terraform-on-aws-for-ec2"
key            = "iacdevops/dev/terraform.tfstate"
region         = "us-east-1"
dynamodb_table = "iacdevops-dev-tfstate"
```

stag.conf:

```hcl
bucket         = "terraform-on-aws-for-ec2"
key            = "iacdevops/stag/terraform.tfstate"
region         = "us-east-1"
dynamodb_table = "iacdevops-stag-tfstate"
```

- Go to Services -> S3 -> terraform-on-aws-for-ec2-demo1
- Create Folder iacdevops
- Create Folder iacdevops\dev
- Create Folder iacdevops\stag
- Create DynamoDB Table for the Dev Environment (a Terraform equivalent is sketched after this list):
  - Table Name: iacdevops-dev-tfstate
  - Partition key (Primary Key): LockID (Type as String)
  - Table settings: Use default settings (checked)
  - Click on Create
- Create DynamoDB Table for the Staging Environment:
  - Table Name: iacdevops-stag-tfstate
  - Partition key (Primary Key): LockID (Type as String)
  - Table settings: Use default settings (checked)
  - Click on Create
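As an aside, the same two state-lock tables could be provisioned with Terraform instead of the console. A minimal sketch, hypothetical since the demo creates them manually (the PAY_PER_REQUEST billing mode is my simplification, not the console default):

```hcl
# Hypothetical Terraform equivalent of the console steps above.
resource "aws_dynamodb_table" "dev_tfstate_lock" {
  name         = "iacdevops-dev-tfstate"
  billing_mode = "PAY_PER_REQUEST" # assumption; console defaults differ
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_dynamodb_table" "stag_tfstate_lock" {
  name         = "iacdevops-stag-tfstate"
  billing_mode = "PAY_PER_REQUEST" # assumption; console defaults differ
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```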
We have two options here.
Step-03-01: Option-1: Create separate folders per environment and maintain the same TF config files (c1 to c13) per environment.
This is more work, as we need to manage many environment-related configs:
- Dev - C1 to C13 - approximately 30 files
- QA - C1 to C13 - approximately 30 files
- Stg - C1 to C13 - approximately 30 files
- Prd - C1 to C13 - approximately 30 files
- DR - C1 to C13 - approximately 30 files
- Close to 150 files in which you need to manage changes.
For critical projects that you want to isolate as above, Terraform also recommends this approach, but it is decided
case by case based on the environment you have built, skill levels, and organization-level standards.
Step-03-02: Option-2: Create only one folder and leverage the same C1 to C13 files (approx 30 files) across environments.
Only 30 files to manage across the Dev, QA, Staging, Production and DR environments.
- We are going to take Option-2 and build the pipeline for the Dev and Staging environments.
- Merge vpc.auto.tfvars and ec2instance.auto.tfvars into environment-specific .tfvars files, for example dev.tfvars and stag.tfvars. We also want to leverage the same TF config files across environments.
- We are going to pass the .tfvars file as the -var-file argument to the terraform apply command:
```sh
terraform apply -input=false -var-file=dev.tfvars -auto-approve
```

dev.tfvars:

```hcl
# Environment
environment = "dev"
# VPC Variables
vpc_name = "myvpc"
vpc_cidr_block = "10.0.0.0/16"
vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
vpc_public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
vpc_private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
vpc_database_subnets = ["10.0.151.0/24", "10.0.152.0/24", "10.0.153.0/24"]
vpc_create_database_subnet_group = true
vpc_create_database_subnet_route_table = true
vpc_enable_nat_gateway = true
vpc_single_nat_gateway = true
# EC2 Instance Variables
instance_type = "t3.micro"
instance_keypair = "terraform-key"
private_instance_count = 2
```

stag.tfvars:

```hcl
# Environment
environment = "stag"
# VPC Variables
vpc_name = "myvpc"
vpc_cidr_block = "10.0.0.0/16"
vpc_availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
vpc_public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
vpc_private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
vpc_database_subnets = ["10.0.151.0/24", "10.0.152.0/24", "10.0.153.0/24"]
vpc_create_database_subnet_group = true
vpc_create_database_subnet_route_table = true
vpc_enable_nat_gateway = true
vpc_single_nat_gateway = true
# EC2 Instance Variables
instance_type = "t3.micro"
instance_keypair = "terraform-key"
private_instance_count = 2
```

- Remove / Delete the following two files:
- vpc.auto.tfvars
- ec2instance.auto.tfvars
terraform.tfvars, which auto-loads for all environment creations, will have only the generic variables:
```hcl
# Generic Variables
aws_region       = "us-east-1"
business_divsion = "hr"
```

- Applicable in CodePipeline -> CodeBuild case:

```hcl
provisioner "local-exec" {
  command     = "echo VPC created on `date` and VPC ID: ${module.vpc.vpc_id} >> creation-time-vpc-id.txt"
  working_dir = "local-exec-output-files/"
}
```

- Applicable in CodePipeline -> CodeBuild case:

```hcl
provisioner "local-exec" {
  command     = "echo Destroy time prov `date` >> destroy-time-prov.txt"
  working_dir = "local-exec-output-files/"
  when        = destroy
}
```

```hcl
# Append local.name to "public-bastion-sg"
name = "${local.name}-public-bastion-sg"

# Append local.name to "private-sg"
name = "${local.name}-private-sg"

# Append local.name to "loadbalancer-sg"
name = "${local.name}-loadbalancer-sg"
```

```hcl
# DNS Name Input Variable
variable "dns_name" {
  description = "DNS Name to support multiple environments"
  type        = string
}

# DNS Registration
resource "aws_route53_record" "apps_dns" {
  zone_id = data.aws_route53_zone.mydomain.zone_id
  name    = var.dns_name
  type    = "A"
  alias {
    name                   = module.alb.lb_dns_name
    zone_id                = module.alb.lb_zone_id
    evaluate_target_health = true
  }
}
```

In my case, the domain names change in this step.
I create the hosted zone "kalyandemo.com" in the Route 53 console.
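The aws_route53_record above references the zone through a data source; a minimal sketch of that lookup, assuming the zone name matches the one just created:

```hcl
# Assumed lookup for the hosted zone created in the Route 53 console.
data "aws_route53_zone" "mydomain" {
  name = "kalyandemo.com"
}
```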
Let's create the DNS name records:
dev.tfvars:

```hcl
# DNS Name
dns_name = "devdemo5.kalyandemo.com"
```

stag.tfvars:

```hcl
# DNS Name
dns_name = "stagedemo5.kalyandemo.com"
```

```hcl
subject_alternative_names = [
  #"*.kalyandemo.com"
  var.dns_name
]
```

```hcl
# Append local.name to name_prefix
name_prefix = "${local.name}-"

# Append Name = local.name
tag_specifications {
  resource_type = "instance"
  tags = {
    Name = local.name
  }
}
```

```hcl
# Append local.name to name_prefix
name_prefix = "${local.name}-"

# Append local.name to name
name = "${local.name}-avg-cpu-policy-greater-than-xx"
name = "${local.name}-alb-target-requests-greater-than-yy"
```

- Go to Services -> Systems Manager -> Application Management -> Parameter Store -> Create Parameter
- Name: /CodeBuild/MY_AWS_ACCESS_KEY_ID
- Description: My AWS Access Key ID for Terraform CodePipeline Project
- Tier: Standard
- Type: SecureString
- Rest all defaults
- Value: ABCXXXXDEFXXXXGHXXX
- Go to Services -> Systems Manager -> Application Management -> Parameter Store -> Create Parameter
- Name: /CodeBuild/MY_AWS_SECRET_ACCESS_KEY
- Description: My AWS Secret Access Key for Terraform CodePipeline Project
- Tier: Standard
- Type: SecureString
- Rest all defaults
- Value: abcdefxjkdklsa55dsjlkdjsakj
(A Terraform equivalent of these two parameters is sketched below.)
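For reference, the same two SecureString parameters could be created with Terraform rather than the console; a hedged sketch with the placeholder values from above (never commit real credentials):

```hcl
# Hypothetical Terraform equivalent of the console steps above.
resource "aws_ssm_parameter" "codebuild_aws_access_key_id" {
  name        = "/CodeBuild/MY_AWS_ACCESS_KEY_ID"
  description = "My AWS Access Key ID for Terraform CodePipeline Project"
  type        = "SecureString"
  value       = "ABCXXXXDEFXXXXGHXXX" # placeholder
}

resource "aws_ssm_parameter" "codebuild_aws_secret_access_key" {
  name        = "/CodeBuild/MY_AWS_SECRET_ACCESS_KEY"
  description = "My AWS Secret Access Key for Terraform CodePipeline Project"
  type        = "SecureString"
  value       = "abcdefxjkdklsa55dsjlkdjsakj" # placeholder
}
```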
- Let's discuss the environment variables we are going to pass:
  - TERRAFORM_VERSION: which Terraform version CodeBuild should use. As of today, 1.7.3 is the latest, so we will use that.
  - TF_COMMAND: we will use apply to create resources, and switch it to destroy in the CodeBuild environment to tear them down.
  - AWS_ACCESS_KEY_ID: /CodeBuild/MY_AWS_ACCESS_KEY_ID (the AWS Access Key ID is safely stored in Parameter Store)
  - AWS_SECRET_ACCESS_KEY: /CodeBuild/MY_AWS_SECRET_ACCESS_KEY (the AWS Secret Access Key is safely stored in Parameter Store)
buildspec-dev.yml:

```yaml
version: 0.2
env:
  variables:
    TERRAFORM_VERSION: "1.7.3"
    TF_COMMAND: "apply"
    #TF_COMMAND: "destroy"
  parameter-store:
    AWS_ACCESS_KEY_ID: "/CodeBuild/MY_AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "/CodeBuild/MY_AWS_SECRET_ACCESS_KEY"
phases:
  install:
    runtime-versions:
      python: 3.7
    on-failure: ABORT
    commands:
      - tf_version=$TERRAFORM_VERSION
      - wget https://releases.hashicorp.com/terraform/"$TERRAFORM_VERSION"/terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - unzip terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - mv terraform /usr/local/bin/
  pre_build:
    on-failure: ABORT
    commands:
      - echo terraform execution started on `date`
  build:
    on-failure: ABORT
    commands:
      # Project-1: AWS VPC, ASG, ALB, Route53, ACM, Security Groups and SNS
      - cd "$CODEBUILD_SRC_DIR/terraform-manifests"
      - ls -lrt "$CODEBUILD_SRC_DIR/terraform-manifests"
      - terraform --version
      - terraform init -input=false --backend-config=dev.conf
      - terraform validate
      - terraform plan -lock=false -input=false -var-file=dev.tfvars
      - terraform $TF_COMMAND -input=false -var-file=dev.tfvars -auto-approve
  post_build:
    on-failure: CONTINUE
    commands:
      - echo terraform execution completed on `date`
```

buildspec-stag.yml (identical except for the backend config and var file):

```yaml
version: 0.2
env:
  variables:
    TERRAFORM_VERSION: "1.7.3"
    TF_COMMAND: "apply"
    #TF_COMMAND: "destroy"
  parameter-store:
    AWS_ACCESS_KEY_ID: "/CodeBuild/MY_AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "/CodeBuild/MY_AWS_SECRET_ACCESS_KEY"
phases:
  install:
    runtime-versions:
      python: 3.7
    on-failure: ABORT
    commands:
      - tf_version=$TERRAFORM_VERSION
      - wget https://releases.hashicorp.com/terraform/"$TERRAFORM_VERSION"/terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - unzip terraform_"$TERRAFORM_VERSION"_linux_amd64.zip
      - mv terraform /usr/local/bin/
  pre_build:
    on-failure: ABORT
    commands:
      - echo terraform execution started on `date`
  build:
    on-failure: ABORT
    commands:
      # Project-1: AWS VPC, ASG, ALB, Route53, ACM, Security Groups and SNS
      - cd "$CODEBUILD_SRC_DIR/terraform-manifests"
      - ls -lrt "$CODEBUILD_SRC_DIR/terraform-manifests"
      - terraform --version
      - terraform init -input=false --backend-config=stag.conf
      - terraform validate
      - terraform plan -lock=false -input=false -var-file=stag.tfvars
      - terraform $TF_COMMAND -input=false -var-file=stag.tfvars -auto-approve
  post_build:
    on-failure: CONTINUE
    commands:
      - echo terraform execution completed on `date`
```

- Go to github.com and login with your credentials
- Click on the Repositories tab
- Click on New to create a new repository
- Repository Name: terraform-iacdevops-with-aws-codepipeline
- Description: Implement Terraform IAC DevOps for AWS Project with AWS CodePipeline
- Repository Type: Private
- Choose License: Apache License 2.0
- Click on Create Repository
- Click on Code and copy the repo link
I create a demo-repos folder on my local machine.

```sh
# Change Directory
cd demo-repos

# Execute Git Clone
git clone https://github.com/felixdagnon/terraform-iacdevops-with-aws-codepipeline.git

# Verify Git Status
git status

# Git Commit
git commit -am "First Commit"

# Push files to Remote Repository
git push

# Verify same on Remote Repository
# https://github.com/stacksimplify/terraform-iacdevops-with-aws-codepipeline.git
```

Let's check GitHub: the code is uploaded to the repository.
- Go to the below URL and verify
- Go to Services -> CodePipeline -> Create Pipeline
- In Developer Tools -> Click on Settings -> Connections -> Create Connection (a Terraform sketch of this connection follows this list)
- Select Provider: GitHub
- Connection Name: terraform-iacdevops-aws-cp-con1
- Click on Connect to GitHub
- GitHub Apps: Click on Install new app
- It should redirect to the GitHub page to install the AWS Connector for GitHub
- Only select repositories: terraform-iacdevops-with-aws-codepipeline
- Click on Save; it redirects back to the AWS console
- Click on Connect
- Verify Connection Status: it should be in the Available state
- Go to the below URL and verify; you should see the Install AWS Connector for GitHub app installed
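If you prefer to codify the connection itself, a hypothetical Terraform sketch is below; note that a connection created this way starts in Pending status, and the GitHub app handshake still has to be completed once in the console:

```hcl
# Hypothetical sketch of the CodeStar connection created above via the console.
resource "aws_codestarconnections_connection" "github" {
  name          = "terraform-iacdevops-aws-cp-con1"
  provider_type = "GitHub"
}
```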
- Go to Services -> CodePipeline -> Create Pipeline
- Pipeline Name: tf-iacdevops-aws-cp1
- Service role: New Service Role
- Rest all defaults
- Artifact store: Default Location
- Encryption Key: Default AWS Managed Key
- Click Next
- Source Provider: GitHub (Version 2)
- Connection: terraform-iacdevops-aws-cp-con1
- Repository name: terraform-iacdevops-with-aws-codepipeline
- Branch name: main
- Change detection options: leave the defaults as checked
- Output artifact format: leave the default, CodePipeline default
- Build Provider: AWS CodeBuild
- Region: N.Virginia
- Project Name: Click on Create Project (a Terraform sketch of this project follows this list)
  - Project Name: codebuild-tf-iacdevops-aws-cp1
  - Description: CodeBuild Project for Dev Stage of IAC DevOps Terraform Demo
  - Environment image: Managed Image
  - Operating System: Amazon Linux 2
  - Runtimes: Standard
  - Image: latest available today (aws/codebuild/amazonlinux2-x86_64-standard:3.0)
  - Environment Type: Linux
  - Service Role: New (leave the defaults, including Role Name)
  - Build specifications: use a buildspec file
  - Buildspec name - optional: buildspec-dev.yml (ensure this file is present in the root folder of your GitHub repository)
  - Rest all leave to defaults
  - Click on Continue to CodePipeline
- Project Name: this value should be auto-populated with codebuild-tf-iacdevops-aws-cp1
- Build Type: Single Build
- Click Next
- Click on Skip Deploy Stage
- Click on Create Pipeline
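For comparison, here is a hypothetical Terraform sketch of the dev CodeBuild project configured above; the tutorial itself uses the console, and the service role reference is an assumption:

```hcl
# Hypothetical sketch mirroring the console choices above.
resource "aws_codebuild_project" "dev_deploy" {
  name         = "codebuild-tf-iacdevops-aws-cp1"
  description  = "CodeBuild Project for Dev Stage of IAC DevOps Terraform Demo"
  service_role = aws_iam_role.codebuild_service_role.arn # assumed role resource

  artifacts {
    type = "CODEPIPELINE" # pipeline-managed input/output artifacts
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
    type         = "LINUX_CONTAINER"
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = "buildspec-dev.yml" # file in the repo root
  }
}
```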
- Verify Source Stage: should pass
- Verify Build Stage: should fail with an error
- Verify the Build Stage logs by clicking on Details in the pipeline screen:
```
[Container] 2021/05/11 06:24:06 Waiting for agent ping
[Container] 2021/05/11 06:24:09 Waiting for DOWNLOAD_SOURCE
[Container] 2021/05/11 06:24:09 Phase is DOWNLOAD_SOURCE
[Container] 2021/05/11 06:24:09 CODEBUILD_SRC_DIR=/codebuild/output/src851708532/src
[Container] 2021/05/11 06:24:09 YAML location is /codebuild/output/src851708532/src/buildspec-dev.yml
[Container] 2021/05/11 06:24:09 Processing environment variables
[Container] 2021/05/11 06:24:09 Decrypting parameter store environment variables
[Container] 2021/05/11 06:24:09 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2021/05/11 06:24:09 Phase context status code: Decrypted Variables Error Message: AccessDeniedException: User: arn:aws:sts::180789647333:assumed-role/codebuild-codebuild-tf-iacdevops-aws-cp1-service-role/AWSCodeBuild-97595edc-1db1-4070-97a0-71fa862f0993 is not authorized to perform: ssm:GetParameters on resource: arn:aws:ssm:us-east-1:180789647333:parameter/CodeBuild/MY_AWS_ACCESS_KEY_ID
```
- Get the name of the IAM service role the CodeBuild project is using
- Go to CodeBuild -> codebuild-tf-iacdevops-aws-cp1 -> Edit -> Environment
- Make a note of the Service Role ARN; here it is "codebuild-codebuild-tf-iacdevops-aws-cp1-service-role"

```
# CodeBuild Service Role ARN
arn:aws:iam::180789647333:role/service-role/codebuild-codebuild-tf-iacdevops-aws-cp1-service-role
```
- Go to Services -> IAM -> Policies -> Create Policy (a Terraform sketch of this policy follows this list)
- Service: Systems Manager
- Actions: Get Parameters (under Read)
- Resources: All
- Click Next Tags
- Click Next Review
- Policy name: systems-manger-get-parameter-access
- Policy Description: Read Parameters from Parameter Store in AWS Systems Manager Service
- Click on Create Policy
- Go to Services -> IAM -> Roles -> Search for codebuild-codebuild-tf-iacdevops-aws-cp1-service-role
- Attach the policy named systems-manger-get-parameter-access
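The equivalent policy in Terraform would look roughly like this; a hypothetical sketch, since the tutorial creates it in the console (names mirror the console steps):

```hcl
# Hypothetical sketch of the console-created policy.
resource "aws_iam_policy" "ssm_get_parameter_access" {
  name        = "systems-manger-get-parameter-access"
  description = "Read Parameters from Parameter Store in AWS Systems Manager Service"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ssm:GetParameters"]
      Resource = "*"
    }]
  })
}
```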
- Go to Services -> CodePipeline -> tf-iacdevops-aws-cp1
- Click on Release Change
- Verify Source Stage: should pass
- Verify Build Stage:
  - Verify the Build Stage logs by clicking on Details in the pipeline screen
  - Verify the CloudWatch -> Log Groups logs too (logs are saved in CloudWatch for additional reference)
Let's check the log events: they show that the build and post-build states succeeded.
Let's verify the resources. CodeBuild created the underlying resources for the VPC; let's verify the CodeBuild log.
All the resources are complete:
- Download phase complete
- Install and pre-build phases completed
- Initializing the backend...
- Successfully configured the backend "s3"! Terraform will automatically use this backend.
- "terraform.tfstate" is created in S3
- Terraform has been successfully initialized.
- Running the commands terraform validate, terraform plan, terraform apply
- terraform plan completed
- terraform apply completed
All resources are created in Dev.
- Confirm the SNS subscription in your email: subscription confirmed, SNS topic confirmed.
- Verify EC2 Instances: the "dev-BastionHost" and 2 "hr-dev" instances are created and running.
- Verify Launch Templates (high level): the launch template "hr-dev-2024022709342795200000000b" was created because we appended local.name to name_prefix.
- Verify Autoscaling Group (high level): the Auto Scaling group "hr-dev-2024022709342795200000000b" was created because we appended local.name to name_prefix. Let's verify the target tracking policy.
- Verify Load Balancer: the load balancer "hr-dev" is created with its listeners. Let's check the certificate that was created.
- Verify Load Balancer Target Group health checks: the target groups are healthy because the instances running in the availability zones are healthy.
- Access and Test

```
https://devdemo5.kalyandemo.com
https://devdemo5.kalyandemo.com/app1/index.html
https://devdemo5.kalyandemo.com/app1/metadata.html
```

Access and test "https://devdemo5.kalyandemo.com".
Let's first verify Route 53: the record "devdemo5.kalyandemo.com" is created.
Copy the link "devdemo5.kalyandemo.com" and paste the URL: we obtain a secure HTTPS connection.
Click through to view the complete certificate information.
Access and test "https://devdemo5.kalyandemo.com/app1/index.html": the test is valid.
Access and test "https://devdemo5.kalyandemo.com/app1/metadata.html":
the metadata information is complete, and the private IPs of the two instances alternate on refresh, so the load balancer is working
across the first and second availability zones.
- Go to Services -> AWS CodePipeline -> tf-iacdevops-aws-cp1 -> Edit
- Name: Email-Approval
- Action Name: Email-Approval
- Action Group
- Action Provider: Manual Approval (a Terraform sketch of this stage follows this list)
- SNS Topic: select the SNS topic from the dropdown
- Comments: Approve to deploy to staging environment
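Inside a Terraform-managed aws_codepipeline resource, the same approval stage would look roughly like this; a hypothetical sketch, where the SNS topic resource name is an assumption:

```hcl
# Hypothetical sketch of the Email-Approval stage added above.
# This stage block sits inside an aws_codepipeline resource.
stage {
  name = "Email-Approval"

  action {
    name     = "Email-Approval"
    category = "Approval"
    owner    = "AWS"
    provider = "Manual"
    version  = "1"

    configuration = {
      NotificationArn = aws_sns_topic.pipeline_approvals.arn # assumed topic
      CustomData      = "Approve to deploy to staging environment"
    }
  }
}
```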
- Go to Services -> AWS CodePipeline -> tf-iacdevops-aws-cp1 -> Edit
- Name: Stage-Deploy
- Action Name: Stage-Deploy
- Region: US East (N. Virginia)
- Action Provider: AWS CodeBuild
- Input Artifacts: Source Artifact
- Project Name: Click on Create Project
  - Project Name: stage-deploy-IACDEVOPS-CB
  - Description: CodeBuild Project for Staging Environment of IAC DevOps Terraform Demo
  - Environment image: Managed Image
  - Operating System: Amazon Linux 2
  - Runtimes: Standard
  - Image: latest available today (aws/codebuild/amazonlinux2-x86_64-standard:3.0)
  - Environment Type: Linux
  - Service Role: New (leave the defaults, including Role Name)
  - Build specifications: use a buildspec file
  - Buildspec name - optional: buildspec-stag.yml (ensure this file is present in the root folder of your GitHub repository)
  - Rest all leave to defaults
  - Click on Continue to CodePipeline
- Project Name: this value should be auto-populated with stage-deploy-IACDEVOPS-CB
- Build Type: Single Build
- Click on Done
- Review the Edit Action
- Click on Save
- Now we have added "Manual approval" and "Stage-Deploy".
Let's search for the role "codebuild-stage-deploy-IACDEVOPS-CB-service-role" in the IAM Roles console.
- Update the IAM role created as part of this Stage-Deploy CodeBuild project by attaching the policy systems-manger-get-parameter-access.
- Go to Services -> AWS CodePipeline -> tf-iacdevops-aws-cp1
- Click on Release Change
- Verify Source Stage
- Verify Build Stage (Dev environment - dev deploy phase)
- Verify Manual Approval Stage - approve the change
Let's go to the email, review it, and approve. The approval stage of the pipeline succeeded.
- Verify the Stage Deploy stage
Let's check the log events and verify the resources: CodeBuild created the underlying resources for the VPC.
Let's verify the CodeBuild log: it shows that the build and post-build states succeeded.
- Verify the build logs. All the resources are complete:
- Download phase complete
- Install and pre-build phases completed
- Initializing the backend...
- Successfully configured the backend "s3"! Terraform will automatically use this backend.
- "terraform.tfstate" is created in S3
- Terraform has been successfully initialized.
- Running the commands terraform validate, terraform plan, terraform apply
- terraform plan completed
- terraform apply completed
All resources are created in the Stage environment.
Looking at the pipeline: the Source and Build phases succeeded, and the Email-Approval and Stage-Deploy phases succeeded.
- Confirm the SNS subscription in your email: notification received by email, subscription confirmed. Let's verify the SNS topic.
- Verify EC2 Instances: the "stag-BastionHost" and 2 "hr-stag" instances are created and running.
- Verify Launch Templates (high level): the launch template "hr-stag-2024022810181677970000000c" was created because we appended local.name to name_prefix.
- Verify Autoscaling Group (high level): the Auto Scaling group "hr-stag-2024022810181768120000000e" was created because we appended local.name to name_prefix. Let's verify the target tracking policy.
- Verify Load Balancer: the load balancer "hr-stag" is created with its listeners. Let's check the certificate that was created.
- Verify Load Balancer Target Group health checks: the target groups are healthy because the instances running in the availability zones are healthy.
- Access and Test

```
https://stagedemo5.kalyandemo.com
https://stagedemo5.kalyandemo.com/app1/index.html
https://stagedemo5.kalyandemo.com/app1/metadata.html
```

Access and test "https://stagedemo5.kalyandemo.com".
Let's first verify Route 53: the record "stagedemo5.kalyandemo.com" is created.
Copy the link "stagedemo5.kalyandemo.com" and paste the URL: we obtain a secure HTTPS connection.
Click through to view the complete certificate information.
Access and test "https://stagedemo5.kalyandemo.com/app1/index.html": the test is valid.
Access and test "https://stagedemo5.kalyandemo.com/app1/metadata.html":
the metadata information is complete, and the private IPs of the two instances alternate on refresh, so the load balancer is working
across the first and second availability zones.
- Increase the minimum EC2 instances from 2 to 4:

```hcl
# Before
desired_capacity = 2
max_size = 10
min_size = 2

# After
desired_capacity = 4
max_size = 10
min_size = 4
```

```sh
# Verify Changes
git status

# Commit Changes to Local Repository
git add .
git commit -am "ASG Min Size from 2 to 4"

# Push changes to Remote Repository
git push
```

- Go to Services -> CodePipeline -> tf-iacdevops-aws-cp1
The Source and Build phases succeeded.
- Verify the Dev Deploy logs: the Auto Scaling group is modified in dev and the capacity increases.
- Approve at the Manual Approval stage: the Email-Approval and Stage-Deploy phases succeeded.
- Verify the Stage Deploy logs: the Auto Scaling group is modified in the stage environment, 2 -> 4. The deployment completed successfully.
- Go to Services -> EC2 Instances: the newly created instances should be visible.
- hr-dev: 4 EC2 instances. Let's verify the instances: the number of instances increased to 4. Let's check the target groups.
- hr-stag: 4 EC2 instances. The number of instances increased to 4. Let's check the target groups, verify the Auto Scaling group, and verify the Auto Scaling load balancer.
In buildspec-dev.yml:

```yaml
# Before
TF_COMMAND: "apply"
#TF_COMMAND: "destroy"

# After
#TF_COMMAND: "apply"
TF_COMMAND: "destroy"
```

In buildspec-stag.yml:

```yaml
# Before
TF_COMMAND: "apply"
#TF_COMMAND: "destroy"

# After
#TF_COMMAND: "apply"
TF_COMMAND: "destroy"
```

```sh
# Verify Changes
git status

# Commit Changes to Local Repository
git add .
git commit -am "Destroy Resources"

# Push changes to Remote Repository
git push
```

- Go to Services -> CodePipeline -> tf-iacdevops-aws-cp1
- Verify the Dev Deploy logs: all resources are being destroyed. Let's verify the pipeline.
- Approve at the Manual Approval stage: edit the pipeline at the approval stage, change the SNS topic, and release the pipeline. Then review the Email-Approval stage and approve. All resources in Stage-Deploy are destroyed.
- Verify the Stage Deploy logs.
- Looking at the pipeline: all phases completed.
- Let's check the resources: all EC2 instances are terminated.
- Change them back to the original state:

```hcl
# Before
desired_capacity = 4
max_size = 10
min_size = 4

# After
desired_capacity = 2
max_size = 10
min_size = 2
```

- Change them back to the original state:

```yaml
# Before
#TF_COMMAND: "apply"
TF_COMMAND: "destroy"

# After
TF_COMMAND: "apply"
#TF_COMMAND: "destroy"
```

```sh
# Verify Changes
git status

# Commit Changes to Local Repository
git add .
git commit -am "Fixed all the changes back to demo state"

# Push changes to Remote Repository
git push
```