This repository contains a demo project created as part of my DevOps studies in the TechWorld with Nana – DevOps Bootcamp.

# Demo Project: Create AWS EKS cluster with a Node Group

**Technologies used:** Kubernetes, AWS EKS
## Project Description

- Configure necessary IAM Roles
- Create VPC with CloudFormation Template for Worker Nodes
- Create EKS cluster (Control Plane Nodes)
- Create Node Group for Worker Nodes and attach to EKS cluster
- Configure Auto-Scaling of worker nodes
- Deploy a sample application to EKS cluster
## Configure necessary IAM Roles

- Navigate to IAM -> Roles -> Create role
- Select Trusted entity type: AWS Service
- Select Use case: EKS Cluster
- Set Role name: `eks-cluster-role`
- Create the role
## Create VPC with CloudFormation Template for Worker Nodes

- Navigate to CloudFormation -> Create Stack
- Reference the AWS documentation for creating a VPC for EKS
- Paste the following URL into the text area under Amazon S3 URL and choose Next:
  https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
- Set Stack name: `eks-worker-node-vpc-stack`
- Use the default configs in the next steps: Next -> Next -> Submit
- Wait until all 22 VPC resources are created
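Stack creation can also be monitored from the AWS CLI instead of refreshing the console; a minimal sketch, assuming the same `admin` profile that is used later in this guide:

```shell
# Poll the CloudFormation stack status (CREATE_COMPLETE when done)
aws cloudformation describe-stacks \
  --stack-name eks-worker-node-vpc-stack \
  --profile admin \
  --query "Stacks[0].StackStatus" \
  --output text

# Or block until creation finishes (exits non-zero on failure)
aws cloudformation wait stack-create-complete \
  --stack-name eks-worker-node-vpc-stack \
  --profile admin
```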
## Create EKS cluster (Control Plane Nodes)

- Navigate to Amazon Elastic Kubernetes Service -> Create cluster
- Select Custom configuration
- Disable EKS Auto Mode
- Configure the following settings:

  | Setting | Value |
  | --- | --- |
  | Name | `eks-cluster` |
  | Cluster IAM role | `eks-cluster-role` |
  | Cluster authentication mode | EKS API and ConfigMap |

  Leave the remaining settings as default.
- Click Next
- Select the newly created VPC and the associated security groups
- Set Cluster endpoint access: Public and private
- Click Next
- Skip Configure observability -> Click Next
- Skip Select add-ons -> Click Next
- Skip Configure selected add-ons settings -> Click Next
- Review and create
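Cluster provisioning takes several minutes. Before connecting kubectl, you can check the status from the CLI; a sketch, assuming the `admin` profile:

```shell
# Show the cluster status (ACTIVE when provisioning is done)
aws eks describe-cluster \
  --name eks-cluster \
  --profile admin \
  --query "cluster.status" \
  --output text

# Or block until the cluster is ACTIVE
aws eks wait cluster-active --name eks-cluster --profile admin
```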
```sh
aws eks update-kubeconfig --name eks-cluster --profile admin

# Verify access
kubectl get ns
kubectl cluster-info
```
## Create Node Group for Worker Nodes and attach to EKS cluster

- Navigate to IAM -> Roles -> Create role
- Select Trusted entity type: AWS Service
- Select Use case: EC2
- Click Next and add the following permissions:
  - AmazonEKSWorkerNodePolicy
  - AmazonEC2ContainerRegistryReadOnly
  - AmazonEKS_CNI_Policy
- Click Next
- Set Role name: `eks-node-group-role`
- Create the role
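To double-check that the role picked up all three managed policies, a quick CLI sketch (assuming the `admin` profile):

```shell
# List the managed policies attached to the node group role
aws iam list-attached-role-policies \
  --role-name eks-node-group-role \
  --profile admin \
  --query "AttachedPolicies[].PolicyName" \
  --output text
```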
- Navigate to eks-cluster -> Compute -> Add node group
- Set Name: `eks-node-group`
- Set Node IAM role: `eks-node-group-role`
- Click Next and set Instance types: `t3.small`
- Click Next -> Specify networking (leave subnet defaults)
- Check Configure remote access to nodes
- Choose an existing EC2 key pair or create a new one
- Set Allow remote access from: All
- Click Next -> Create
- Navigate to EC2 to verify the newly launched instances
- Verify the nodes are registered:

  ```sh
  kubectl get nodes
  ```
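Nodes can take a minute or two to register with the cluster; a sketch to block until they report Ready:

```shell
# Wait until all registered nodes report the Ready condition
kubectl wait --for=condition=Ready nodes --all --timeout=300s

# Show node status, Kubernetes version, and internal IPs
kubectl get nodes -o wide
```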
## Configure Auto-Scaling of worker nodes

AWS does not automatically autoscale resources -- you need to configure the Cluster Autoscaler. The Auto Scaling Group itself is created automatically with the Node Group.
- Navigate to IAM -> Policies -> Create policy
- Paste the JSON content from the custom-policy.json file
- Click Next
- Set Policy name: `ClusterAutoscalerPolicy`
- Create the policy
- Go back to the EKS cluster eks-cluster
- Copy the OpenID Connect provider URL
- Navigate to IAM -> Identity Providers -> Add provider

  | Setting | Value |
  | --- | --- |
  | Provider type | OpenID Connect |
  | Provider URL | Paste your cluster's OpenID Connect provider URL |
  | Audience | `sts.amazonaws.com` |

- Click Add provider
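The OpenID Connect provider URL can also be read from the CLI instead of the console; a sketch, assuming the `admin` profile:

```shell
# Print the cluster's OpenID Connect issuer URL
aws eks describe-cluster \
  --name eks-cluster \
  --profile admin \
  --query "cluster.identity.oidc.issuer" \
  --output text
```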
- Navigate to IAM -> Roles -> Create role
- Select Trusted entity type: Web identity
- Select the newly created Identity provider
- Set Audience: `sts.amazonaws.com`
- Click Next and attach Permissions policies: ClusterAutoscalerPolicy
- Click Next
- Set Role name: `EKSServiceAccountRole`
- Create the role
- Navigate to EC2 -> Auto Scaling groups -> Tags to verify the tags (automatically created by AWS)
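The auto-discovery tags can also be checked from the CLI; a sketch, assuming the `admin` profile:

```shell
# Show the cluster-autoscaler discovery tags on all Auto Scaling groups
aws autoscaling describe-auto-scaling-groups \
  --profile admin \
  --query "AutoScalingGroups[].Tags[?starts_with(Key, 'k8s.io/cluster-autoscaler')].[Key,Value]" \
  --output text
```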
The base Cluster Autoscaler YAML file can be found here:
https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Customize the manifest with the following changes:

- Navigate to IAM -> Roles -> EKSServiceAccountRole and copy the ARN
- Add the annotation to the ServiceAccount:

  ```yaml
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/EKSServiceAccountRole
  ```

- Go to eks-cluster and note the Kubernetes version
- Find the matching Cluster Autoscaler image tag:
  - Go to the Autoscaler releases
  - Find a tag that matches your Kubernetes version
- Get your AWS region:

  ```sh
  aws configure list --profile admin
  ```

- Apply the following additions to the Deployment:

  ```yaml
  # Pod annotation
  cluster-autoscaler.kubernetes.io/safe-to-evict: "false"

  # Container environment variable
  env:
    - name: AWS_REGION
      value: "YOUR_AWS_REGION"

  # Container command arguments
  - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-cluster
  - --balance-similar-node-groups
  - --skip-nodes-with-system-pods=false

  # Container image (replace with your matching version)
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.XX.X
  ```

See the final manifest: cluster-autoscaler-autodiscover.yaml
- Apply the configuration:

  ```sh
  kubectl apply -f cluster-autoscaler-autodiscover.yaml
  ```

  See logs example: as-logs.txt

- Go to Node Group eks-node-group -> Edit
- Set the following values:

  | Setting | Value |
  | --- | --- |
  | Desired size | 1 |
  | Minimum size | 1 |
  | Maximum size | 3 |
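To confirm the autoscaler is running before testing, a sketch (the `cluster-autoscaler` name, `kube-system` namespace, and `app=cluster-autoscaler` label come from the autodiscover example manifest):

```shell
# The autodiscover manifest deploys the autoscaler into kube-system
kubectl get deployment cluster-autoscaler -n kube-system

# Tail the autoscaler logs to see it discover the Auto Scaling Group
kubectl logs -n kube-system -l app=cluster-autoscaler --tail=50
```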
## Deploy a sample application to EKS cluster

See the application manifest: nginx.yaml

AWS automatically provisions a cloud-native Load Balancer for Kubernetes LoadBalancer services.

- Deploy the application:

  ```sh
  kubectl apply -f nginx.yaml
  ```

- Verify the deployment:

  ```sh
  kubectl get pod
  kubectl get svc
  ```

- Copy the EXTERNAL-IP from the service output and open it in the browser
- Scale up -- increase the replica count to trigger the autoscaler:

  ```sh
  kubectl edit deploy nginx
  # Set replicas: 20
  ```

- Scale down -- return the replica count to normal:

  ```sh
  kubectl edit deploy nginx
  # Set replicas: 1
  ```
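Instead of `kubectl edit`, the replica count can be changed non-interactively and the scale-out watched live; a sketch, where `nginx` is the Deployment name assumed from nginx.yaml:

```shell
# Scale the deployment up to trigger the Cluster Autoscaler
kubectl scale deployment nginx --replicas=20

# Watch pods go Pending, then get scheduled as new nodes join
kubectl get pods --watch

# In another terminal, watch the node count grow (up to the maximum size of 3)
kubectl get nodes --watch

# Scale back down when done
kubectl scale deployment nginx --replicas=1
```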