rajinikanthe/Capstone-Solution

 
 


Task 0: Environment Setup

0.1 Spin up an EC2 t2.micro instance with a security group that allows us to SSH into the machine

0.2 Create an IAM role with AdministratorAccess and attach it to the EC2 instance

0.3 Install AWS CLI. Select Linux from the dropdown.

https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

0.4 Install Terraform. Select Linux from the dropdown.

https://learn.hashicorp.com/tutorials/terraform/install-cli

0.5 Install Docker.

https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-22-04

0.6 Install Eksctl. Select Linux from the dropdown.

https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html

0.7 Install Kubectl. Select Linux from the dropdown.

https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
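If you would rather script the installs in steps 0.3-0.7, the commands look roughly like this on a 64-bit x86 Linux host. Treat this as a sketch: URLs, asset names, and versions drift over time, so prefer the linked docs when in doubt (the Terraform version below is an arbitrary example).

```shell
# AWS CLI v2 (official bundled installer; needs unzip)
sudo apt-get update && sudo apt-get install -y unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip awscliv2.zip && sudo ./aws/install

# Terraform (HashiCorp binary release; adjust the version as needed)
curl -fsSL -o terraform.zip "https://releases.hashicorp.com/terraform/1.2.9/terraform_1.2.9_linux_amd64.zip"
unzip terraform.zip && sudo mv terraform /usr/local/bin/

# Docker (Docker's convenience script)
curl -fsSL https://get.docker.com | sudo sh

# eksctl (latest GitHub release)
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/

# kubectl (stable release from the Kubernetes download site)
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
```

Verify each tool afterwards, e.g. `aws --version`, `terraform version`, `docker --version`, `eksctl version`, and `kubectl version --client`.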

Task 1: Setup EKS Cluster

1.1 Ensure the AWS CLI is installed and configured on your Linux machine with full access to AWS by running the command below:

aws s3 ls

1.2 Initialize an S3 bucket as the backend state store for Terraform.

Go to S3, click "Create bucket", and use a unique custom name. Leave all other settings unchanged and create the bucket.

1.2 Create a terraform folder and a provider.tf file in that folder:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "4.23.0"
    }
  }

  backend "s3" {
    bucket = "sk-capstone-tf"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  # Configuration options
  region = "us-east-1"
}

Run the command below to initialize Terraform:

terraform init

1.3 Create a file vpc.tf for the VPC module:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "sk-capstone-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true

  tags = {
    Terraform = "true"
    Environment = "dev"
  }
}

1.4 Run the command below to check for any errors:

terraform plan

1.4 Create a file outputs.tf to get the VPC and subnet IDs (you can also get these IDs from the console):

output "vpc_id" {
  description = "ID of project VPC"
  value       = module.vpc.vpc_id
}

output "privateSubnet1_id" {
  description = "ID of privateSubnet1"
  value       = module.vpc.private_subnets[0]
}
output "privateSubnet2_id" {
  description = "ID of privateSubnet2"
  value       = module.vpc.private_subnets[1]
}
output "publicSubnet1_id" {
  description = "ID of publicSubnet1"
  value       = module.vpc.public_subnets[0]
}
output "publicSubnet2_id" {
  description = "ID of publicSubnet2"
  value       = module.vpc.public_subnets[1]
}

1.5 Run the command below to create the VPC and print the VPC and subnet IDs:

terraform apply

Terraform VPC module implemented; outputs created for the VPC and subnet IDs.

1.6 Update the VPC and subnet IDs in the eks-conf.yaml file from the stub files. Add the file to the Cluster directory.
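The real config comes from the stub files; as a rough, hypothetical sketch of the shape of sk-eks-config.yaml (every ID and name below is a placeholder, and the subnet aliases are assumed to match the names referenced later in step 2.7):

```yaml
# Hypothetical sketch only -- fill in the IDs from the Terraform outputs.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sk-capstone-cluster
  region: us-east-1
vpc:
  id: vpc-XXXXXXXXXXXX                # output: vpc_id
  subnets:
    public:
      sk-capstone-cluster-pub-a:
        id: subnet-XXXXXXXXXXXX       # output: publicSubnet1_id
        az: us-east-1a
      sk-capstone-cluster-pub-b:
        id: subnet-XXXXXXXXXXXX       # output: publicSubnet2_id
        az: us-east-1b
    private:
      sk-capstone-cluster-priv-a:
        id: subnet-XXXXXXXXXXXX       # output: privateSubnet1_id
        az: us-east-1a
      sk-capstone-cluster-priv-b:
        id: subnet-XXXXXXXXXXXX       # output: privateSubnet2_id
        az: us-east-1b
nodeGroups:
  - name: pub-201-a-1
    instanceType: t2.medium
    desiredCapacity: 1
    subnets:
      - sk-capstone-cluster-pub-a
```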

Run the commands below to create an EKS cluster and list its nodes:

eksctl create cluster -f sk-eks-config.yaml
kubectl get nodes

EKS cluster created; nodes verified to be Ready.

1.7 Install the Kubernetes Metrics Server:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

Verify the installation

kubectl top nodes

Metrics Server installed and verified.

1.8 Add the file autoscalar.yaml and apply it:

kubectl apply -f autoscalar.yaml

Autoscaler applied

Task 2: Deployment of sample application

2.1 Create an ECR repository to store the Docker image of the Node.js application. No other settings need to be changed.

2.2 Write a Dockerfile to dockerise the upg-loadme Node.js application, in the same folder where the app files are located:

FROM node:12.18.1
LABEL maintainer="swapnilkhot36@gmail.com"
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "./"]
RUN npm install --production
COPY . .
EXPOSE 8081
CMD [ "node", "server.js" ]

2.3 Tag and push the Docker image to the ECR repository (the ECR console lists the exact commands under "View push commands").
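The push commands look roughly like this; the account ID, region, and repository name below are placeholders, so substitute the values from your own ECR console:

```shell
# Authenticate Docker to ECR (account ID and region are placeholders)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the image (repo name is a placeholder)
docker build -t upg-loadme .
docker tag upg-loadme:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/upg-loadme:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/upg-loadme:latest
```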

2.4 Check that the image has been pushed to ECR.

Image visible in the ECR repo

2.5 Run the image using the commands below:

docker images
docker run -itd -p 8081:8081 <image ID>

The image ID is displayed by the docker images command. Make sure you are using the latest image ID.

2.6 Check that the application is online by hitting <EC2 public IP>:8081 in a browser.

Docker container is running and accessible from the browser

2.7 Add a node group config to the config file:

Copy the last public node group config, update the taint config, and give the node group a new name.

  - name: pub-201-a-2
    labels: { role: workers }
    tags:
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/my-eks-201: "shared"
    taints:
      - key: "critical"
        value: "true"
        effect: NoSchedule
    instancesDistribution:
      instanceTypes:
        - t2.medium
    desiredCapacity: 0
    minSize: 0
    maxSize: 1
    subnets:
      - sk-capstone-cluster-pub-a

2.8 Create the node group using the command below:

eksctl create nodegroup --config-file=sk-eks-config.yaml

2.9 Create the namespace "demo" using the command below:

kubectl create ns demo


2.10 Create the YAML for the deployment. Make sure you update the image ID and the toleration spec.
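A minimal sketch of what upg-loadme.yaml could look like, assuming the app listens on port 8081 and should tolerate the critical=true:NoSchedule taint from step 2.7; the names and image URI are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: upg-loadme
spec:
  replicas: 1
  selector:
    matchLabels: { app: upg-loadme }
  template:
    metadata:
      labels: { app: upg-loadme }
    spec:
      # Allow scheduling onto the tainted node group from step 2.7
      tolerations:
        - key: "critical"
          operator: "Equal"
          value: "true"
          effect: "NoSchedule"
      containers:
        - name: upg-loadme
          # Placeholder -- use the image URI from your ECR repo
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/upg-loadme:latest
          ports:
            - containerPort: 8081
          resources:
            requests: { cpu: "100m" }
---
apiVersion: v1
kind: Service
metadata:
  name: upg-loadme
spec:
  selector: { app: upg-loadme }
  ports:
    - port: 8081
      targetPort: 8081
```

The CPU request matters later: the HPA in Task 4 computes utilization relative to the pod's requests.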

2.11 Create the YAML for the ingress. Make sure you update the spec as required.
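A hedged sketch of upg-loadme-ingress.yaml, assuming an ingress controller is installed in the cluster and a Service named upg-loadme on port 8081 (both are assumptions, not from the original):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: upg-loadme-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: upg-loadme   # assumed Service name
                port:
                  number: 8081
```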

2.12 Apply these YAML files with the commands below:

kubectl create -f upg-loadme.yaml --namespace=demo
kubectl create -f upg-loadme-ingress.yaml --namespace=demo

2.13 Ensure the pod is running:

kubectl get pods --namespace=demo


2.14 You can check that the node is tainted as specified:

kubectl describe node ip-10-0-101-198.ec2.internal

Task 3: Deploy Redis server on Kubernetes

3.1 Create redis.yaml

resources:
  limits:
    cpu: "200m"
    memory: "200Mi"
auth:
  enabled: false


3.2 Install Helm following the guide below:

https://helm.sh/docs/intro/install/

3.3 Install Redis using the commands below:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm search repo redis
helm install my-release bitnami/redis -f redis.yaml -n demo


3.4 Confirm that the pods are up and running:

kubectl get pods -n demo


3.5 Create a redis client pod:

kubectl run redis --image redis -n demo


3.6 List the services in the demo namespace:

kubectl get services -n demo


3.7 Exec into the redis pod to get a shell inside it:

kubectl exec -it redis -n demo -- bash

3.8 Run the command below inside the redis pod to connect to the Redis master:

redis-cli -h my-release-redis-master -p 6379


3.9 Set a key from the redis-cli prompt:

SET foo 1


3.10 List the pods to get the Redis master pod name:

kubectl get pods -n demo


3.11 Delete the Redis master pod:

kubectl delete pod my-release-redis-master-0 -n demo


3.12 The pod will be recreated; wait for it to come back up.


3.13 Exec into the redis pod again and confirm the key is still there:

GET foo


Task 4: Test auto scaling of the application.

4.1 Create the YAML for the HPA. Make sure you update the spec as required, then apply it:
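A sketch of what upg-loadme-hpa.yaml could look like, using the autoscaling/v2 API with CPU utilization; the target deployment name, replica bounds, and utilization threshold are placeholders (this also assumes the deployment sets CPU requests, since utilization is measured against them):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: upg-loadme-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: upg-loadme        # assumed deployment name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale out above 50% of requested CPU
```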

kubectl create -f upg-loadme-hpa.yaml --namespace=demo


4.2 Install Prometheus using Helm:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install capstone-prom prometheus-community/prometheus -n demo
