explicit-logic/eks-module-11.1

Module 11 - Kubernetes on AWS - EKS

This repository contains a demo project created as part of my DevOps studies in the TechWorld with Nana – DevOps Bootcamp.

Demo Project: Create AWS EKS cluster with a Node Group

Technologies used: Kubernetes, AWS EKS

Project Description:

  • Configure necessary IAM Roles
  • Create VPC with CloudFormation Template for Worker Nodes
  • Create EKS cluster (Control Plane Nodes)
  • Create Node Group for Worker Nodes and attach to EKS cluster
  • Configure Auto-Scaling of worker nodes
  • Deploy a sample application to EKS cluster

Steps overview


Step 1: Configure Necessary IAM Roles

  1. Navigate to IAM -> Roles -> Create role
  2. Select Trusted entity type: AWS Service
  3. Select Use case: EKS Cluster
  4. Set Role name: eks-cluster-role
  5. Create the role

EKS Cluster Role
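The console steps above can also be sketched with the AWS CLI. This is a hedged equivalent, not the method used in the screenshots: the trust policy and the `AmazonEKSClusterPolicy` attachment are the standard EKS cluster-role setup, and the role name matches the one chosen above.

```shell
# Trust policy that lets the EKS service assume the role
cat > eks-cluster-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Requires valid AWS credentials -- uncomment to run:
# aws iam create-role --role-name eks-cluster-role \
#   --assume-role-policy-document file://eks-cluster-trust-policy.json
# aws iam attach-role-policy --role-name eks-cluster-role \
#   --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```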


Step 2: Create VPC with CloudFormation Template for Worker Nodes

  1. Navigate to CloudFormation -> Create Stack

  2. Reference the AWS documentation for creating a VPC for EKS

  3. Paste the following URL into the text area under Amazon S3 URL and choose Next:

    https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
    

    Create Stack

  4. Set Stack name: eks-worker-node-vpc-stack

  5. Use the default configs in the next steps: Next -> Next -> Submit

  6. Wait until all 22 VPC resources are created

VPC Stack Resources
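For reference, the same stack can be created from the CLI. A hedged sketch using the stack name and template URL from the steps above:

```shell
STACK_NAME=eks-worker-node-vpc-stack
TEMPLATE_URL=https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml

# Requires valid AWS credentials -- uncomment to run:
# aws cloudformation create-stack --stack-name "$STACK_NAME" --template-url "$TEMPLATE_URL"
# aws cloudformation wait stack-create-complete --stack-name "$STACK_NAME"
```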


Step 3: Create EKS Cluster (Control Plane Nodes)

3.1 Create the cluster

  1. Navigate to Amazon Elastic Kubernetes Service -> Create cluster

  2. Select Custom configuration

  3. Disable EKS Auto Mode

  4. Configure the following settings:

    Setting                       Value
    ---------------------------   ---------------------
    Name                          eks-cluster
    Cluster IAM role              eks-cluster-role
    Cluster authentication mode   EKS API and ConfigMap

    Leave the remaining settings as default

  5. Click Next

3.2 Configure networking

  1. Select the newly created VPC and the associated security groups

    EKS Cluster Networking

  2. Set Cluster endpoint access: Public and private

    Cluster Endpoint Access

  3. Click Next

  4. Skip Configure observability -> Click Next

  5. Skip Select add-ons -> Click Next

  6. Skip Configure selected add-ons settings -> Click Next

  7. Review and create

EKS Cluster Created
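Sections 3.1 and 3.2 can be condensed into a single CLI call. This is a hedged sketch: the subnet and security-group IDs are placeholders that come from the outputs of the VPC stack created in Step 2, and `ACCOUNT_ID` must be replaced with your own.

```shell
# Placeholders -- take the real IDs from the eks-worker-node-vpc-stack outputs
ROLE_ARN=arn:aws:iam::ACCOUNT_ID:role/eks-cluster-role
SUBNET_IDS=subnet-aaaa,subnet-bbbb
SG_ID=sg-cccc

# Requires valid AWS credentials -- uncomment to run:
# aws eks create-cluster --name eks-cluster --role-arn "$ROLE_ARN" \
#   --resources-vpc-config "subnetIds=$SUBNET_IDS,securityGroupIds=$SG_ID,endpointPublicAccess=true,endpointPrivateAccess=true"
# aws eks wait cluster-active --name eks-cluster
```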

3.3 Connect to the cluster

# Point kubectl at the new cluster (the AWS profile name may differ)
aws eks update-kubeconfig --name eks-cluster --profile admin

# Verify access
kubectl get ns
kubectl cluster-info

Update Kubeconfig


Step 4: Create Node Group for Worker Nodes

4.1 Create the Node Group IAM role

  1. Navigate to IAM -> Roles -> Create role

  2. Select Trusted entity type: AWS Service

  3. Select Use case: EC2

    Node Group Role

  4. Click Next and add the following permissions:

    • AmazonEKSWorkerNodePolicy
    • AmazonEC2ContainerRegistryReadOnly
    • AmazonEKS_CNI_Policy
  5. Click Next

  6. Set Role name: eks-node-group-role

  7. Create the role
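A hedged CLI sketch of the same role setup: EC2 is the trusted service, and the three managed policies listed above are attached in a loop.

```shell
# Trust policy that lets EC2 instances assume the role
cat > node-group-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Requires valid AWS credentials -- uncomment to run:
# aws iam create-role --role-name eks-node-group-role \
#   --assume-role-policy-document file://node-group-trust-policy.json
# for policy in AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy; do
#   aws iam attach-role-policy --role-name eks-node-group-role \
#     --policy-arn "arn:aws:iam::aws:policy/$policy"
# done
```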

4.2 Add the Node Group to the EKS cluster

  1. Navigate to eks-cluster -> Compute -> Add node group

  2. Set Name: eks-node-group

  3. Set Node IAM role: eks-node-group-role

    Create Node Group

  4. Click Next and set Instance types: t3.small

    Node Group Compute

  5. Click Next -> Specify networking (leave subnet defaults)

  6. Check Configure remote access to nodes

    • Choose an existing EC2 key pair or create a new one
    • Set Allow remote access from: All

    Remote Access

  7. Click Next -> Create

  8. Navigate to EC2 to verify the newly launched instances

    EC2 Instances

  9. Verify the nodes are registered:

    kubectl get nodes

    Kubectl Get Nodes
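The node-group steps above can be sketched with the CLI as well. The role ARN, subnet IDs, and key-pair name are placeholders; the cluster, node-group, and instance-type values are the ones used in this guide.

```shell
# Placeholders -- substitute your real values
NODE_ROLE_ARN=arn:aws:iam::ACCOUNT_ID:role/eks-node-group-role
SUBNET_IDS="subnet-aaaa subnet-bbbb"
KEY_PAIR=my-key-pair

# Requires valid AWS credentials -- uncomment to run:
# aws eks create-nodegroup --cluster-name eks-cluster --nodegroup-name eks-node-group \
#   --node-role "$NODE_ROLE_ARN" --subnets $SUBNET_IDS \
#   --instance-types t3.small --remote-access "ec2SshKey=$KEY_PAIR"
```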


Step 5: Configure Auto-Scaling of Worker Nodes

AWS does not scale the worker nodes automatically -- you need to deploy and configure the Kubernetes Cluster Autoscaler yourself.

Cluster Autoscaler Overview

AWS Autoscaler

Autoscaler Flow

5.1 Verify the Auto Scaling Group

The Auto Scaling Group is created automatically with the Node Group.

  1. Open the Node Group

  2. Go to Details -> note the Autoscaling group name

    Autoscaling Group Name

5.2 Create a custom IAM policy

  1. Navigate to IAM -> Policies -> Create policy
  2. Paste the JSON content from the custom-policy.json file
  3. Click Next
  4. Set Policy name: ClusterAutoscalerPolicy
  5. Create the policy
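The exact JSON lives in the repository's custom-policy.json file. For orientation only, the Cluster Autoscaler permissions recommended in the upstream AWS examples typically look like the sketch below -- treat it as a reference, not as a copy of the repo file.

```shell
# Typical Cluster Autoscaler permissions (upstream example) -- may differ from custom-policy.json
cat > cluster-autoscaler-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": "*"
    }
  ]
}
EOF
```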

5.3 Configure the OIDC Identity Provider

  1. Go back to EKS cluster eks-cluster

  2. Copy the OpenID Connect provider URL

    OpenID URL

  3. Navigate to IAM -> Identity Providers -> Add provider

    Setting        Value
    -------------  ----------------------------------------
    Provider type  OpenID Connect
    Provider URL   Paste your cluster's OpenID provider URL
    Audience       sts.amazonaws.com

    Identity Provider

  4. Click Add provider
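If you have eksctl installed, the same association can be done in one command instead of the console steps above (shown as a hedged alternative):

```shell
CLUSTER_NAME=eks-cluster

# Assumes eksctl is installed; requires AWS credentials -- uncomment to run:
# eksctl utils associate-iam-oidc-provider --cluster "$CLUSTER_NAME" --approve
```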

5.4 Create the EKS Service Account role

  1. Navigate to IAM -> Roles -> Create role

  2. Select Trusted entity type: Web identity

  3. Select the newly created Identity provider

  4. Set Audience: sts.amazonaws.com

    Web Identity Role

  5. Click Next and attach Permissions policies: ClusterAutoscalerPolicy

  6. Click Next

  7. Set Role name: EKSServiceAccountRole

  8. Create the role

  9. Navigate to EC2 -> Auto Scaling groups -> Tags to verify the tags (automatically created by AWS)

    Autoscaling Group Tags
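For reference, the trust policy the console generates for this Web identity role has the shape sketched below. `ACCOUNT_ID`, `REGION`, and `OIDC_ID` are placeholders for your account and your cluster's OpenID Connect provider; the real document is created for you in step 4.

```shell
# Sketch of the generated Web identity trust policy -- placeholders, not real IDs
cat > eks-service-account-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/oidc.eks.REGION.amazonaws.com/id/OIDC_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.REGION.amazonaws.com/id/OIDC_ID:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF
```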

5.5 Deploy the Kubernetes Cluster Autoscaler

The base cluster autoscaler YAML file can be found here: https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Customize the manifest with the following changes:

  1. Navigate to IAM -> Roles -> EKSServiceAccountRole and copy the ARN

    EKS Service Account Role ARN

  2. Add the annotation to the ServiceAccount:

    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/EKSServiceAccountRole
  3. Go to eks-cluster and note the Kubernetes version

    Kubernetes Version

  4. Find the matching Cluster Autoscaler image tag:

    Cluster Autoscaler Tag

  5. Get your AWS region:

    aws configure list --profile admin
  6. Apply the following additions to the Deployment:

    # Pod annotation
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
    
    # Container environment variable
    env:
      - name: AWS_REGION
        value: "YOUR_AWS_REGION"
    
    # Container command arguments
    - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-cluster
    - --balance-similar-node-groups
    - --skip-nodes-with-system-pods=false
    
    # Container image (replace with your matching version)
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.XX.X

    See the final manifest: cluster-autoscaler-autodiscover.yaml

  7. Apply the configuration:

    kubectl apply -f cluster-autoscaler-autodiscover.yaml

    Deploy Autoscaler

    See logs example: as-logs.txt

5.6 Configure Node Group scaling limits

  1. Go to Node Group eks-node-group -> Edit

  2. Set the following values:

    Setting       Value
    ------------  -----
    Desired size  1
    Minimum size  1
    Maximum size  3

    Edit Node Group

    Autoscaling Group
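The same scaling limits can be set from the CLI. A hedged sketch using the names and sizes from the steps above:

```shell
CLUSTER_NAME=eks-cluster
NODEGROUP=eks-node-group

# Requires valid AWS credentials -- uncomment to run:
# aws eks update-nodegroup-config --cluster-name "$CLUSTER_NAME" --nodegroup-name "$NODEGROUP" \
#   --scaling-config minSize=1,maxSize=3,desiredSize=1
```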


Step 6: Deploy a Sample Application to the EKS Cluster

See the application manifest: nginx.yaml

For Kubernetes Services of type LoadBalancer, AWS automatically provisions a cloud-native load balancer.

Cloud Native Load Balancer
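A manifest of this shape pairs an nginx Deployment with a Service of type LoadBalancer. The sketch below is hypothetical and not necessarily identical to the repo's nginx.yaml -- image tag, names, and labels are assumptions:

```shell
# Hypothetical minimal manifest; the repo's nginx.yaml may differ in detail
cat > nginx-sketch.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer     # triggers AWS load balancer provisioning
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
EOF
```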

  1. Deploy the application:

    kubectl apply -f nginx.yaml
  2. Verify the deployment:

    kubectl get pod
    kubectl get svc

    Deploy Nginx App

  3. Copy the EXTERNAL-IP from the service output and open it in the browser

    Open App

6.1 Test Auto-Scaling

Scale up -- increase the replica count to trigger the autoscaler:

kubectl edit deploy nginx

Set replicas: 20

Scale Up

Scale down -- return the replica count to normal:

kubectl edit deploy nginx

Set replicas: 1

Scale Down
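As a non-interactive alternative to `kubectl edit`, the replica counts above can also be set with `kubectl scale`:

```shell
REPLICAS_UP=20
REPLICAS_DOWN=1

# Requires cluster access -- uncomment to run:
# kubectl scale deployment nginx --replicas="$REPLICAS_UP"
# kubectl get nodes -w          # watch the autoscaler add worker nodes
# kubectl scale deployment nginx --replicas="$REPLICAS_DOWN"
```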
