
Deploy Kubernetes in AWS


Prerequisites for Kubernetes

Install kubectl

Use the Kubernetes command-line tool, kubectl, to deploy and manage applications on Kubernetes. Using kubectl, you can inspect cluster resources; create, delete, and update components; and look at your new cluster and bring up example apps.

More information and other platforms: Install and Set Up kubectl (Kubernetes documentation).

kubectl setup on Linux
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -m 755 -o root -g root kubectl /usr/local/bin/
kubectl setup on Mac

Using the Homebrew package manager:

brew install kubectl
kubectl setup on Windows
Install-Script -Name install-kubectl -Scope CurrentUser -Force

Managed Kubernetes with EKS

  • What is EKS?

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it provides automated version upgrades and patching for them.

Amazon Elastic Container Service for Kubernetes

Prerequisites for the EKS deploy

Update AWS CLI

Amazon EKS requires AWS CLI version 1.15.32 or later.

pip install awscli --upgrade
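
A quick way to confirm the installed CLI meets that minimum is a `sort -V` comparison. A minimal sketch — the `ver` value would normally come from parsing `aws --version` output, and is hard-coded here for illustration:

```shell
# Minimum AWS CLI version required by EKS
min="1.15.32"
# In practice, extract the version from `aws --version`, e.g.:
#   ver=$(aws --version 2>&1 | cut -d/ -f2 | cut -d' ' -f1)
ver="1.16.0"
# `sort -V` orders version strings numerically; if the smallest of the
# two is $min, then $ver >= $min
if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
    echo "AWS CLI ${ver} satisfies the minimum (${min})"
else
    echo "AWS CLI ${ver} is older than ${min}, run: pip install awscli --upgrade"
fi
```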

Install heptio-authenticator

Amazon EKS clusters require kubectl and kubelet binaries and the Heptio Authenticator to allow IAM authentication for your Kubernetes cluster. Beginning with Kubernetes version 1.10, you can configure the stock kubectl client to work with Amazon EKS by installing the Heptio Authenticator and modifying your kubectl configuration file to use it for authentication.

More information at Configure kubectl for Amazon EKS.

Install heptio-authenticator on Linux
curl -sqo ./heptio-authenticator-aws https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/linux/amd64/heptio-authenticator-aws
sudo install -m 775 -o root -g root heptio-authenticator-aws /usr/local/bin/
rm -v heptio-authenticator-aws
Install heptio-authenticator on Mac
curl -o heptio-authenticator-aws https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/darwin/amd64/heptio-authenticator-aws
sudo install -m 775 -o root -g admin heptio-authenticator-aws /usr/local/bin/
rm -v heptio-authenticator-aws

Deploy the EKS Cluster

Set region variable for AWS Cli

The default region can be set with aws configure. To be explicit in this lab, the region will be passed on each AWS CLI call.

export AWS_REGION='us-east-1'

Get the VPC information where the EKS will be deployed

export AWS_EKS_VPC=$(\
    aws --region ${AWS_REGION} ec2 describe-vpcs \
    --query 'Vpcs[?IsDefault].VpcId' \
    --output text)
export AWS_EKS_VPC_SUBNETS_CSV=$(\
    aws --region ${AWS_REGION} ec2 describe-subnets \
    --query "Subnets[?VpcId=='${AWS_EKS_VPC}'] | [?ends_with(AvailabilityZone,'b') || ends_with(AvailabilityZone,'a')].SubnetId" \
    --output text | sed "s/$(printf '\t')/,/g")
env | grep AWS_EKS_VPC

Retrieves the default VPC Id and saves it to the AWS_EKS_VPC environment variable. Then retrieves the Ids of subnets for two zones (a, b) of that VPC as comma-separated values. The list of subnets is stored in the AWS_EKS_VPC_SUBNETS_CSV environment variable.
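
The tab-to-comma conversion at the end of the subnet query is worth a closer look, since `--resources-vpc-config` later expects a comma-separated list. A standalone sketch with simulated `describe-subnets` output:

```shell
# Simulated `aws ec2 describe-subnets ... --output text` result:
# text output separates values with tabs
subnets="$(printf 'subnet-0aaa\tsubnet-0bbb')"

# The same sed used above: replace every tab with a comma
csv=$(printf '%s' "${subnets}" | sed "s/$(printf '\t')/,/g")
echo "${csv}"   # subnet-0aaa,subnet-0bbb
```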

Create EKS Security Group

Before you can create an Amazon EKS cluster, you need a security group for the cluster control plane to communicate with the worker nodes. This only needs to be done one time and the group can be reused for multiple EKS clusters.

export AWS_EKS_SG_NAME='AmazonEKSSecurityGroup'
export AWS_EKS_SG=$(\
    aws --region ${AWS_REGION} ec2 describe-security-groups \
    --group-names ${AWS_EKS_SG_NAME} \
    --query 'SecurityGroups[].GroupId' \
    --output text 2>/dev/null \
    || aws --region ${AWS_REGION} ec2 create-security-group \
    --group-name ${AWS_EKS_SG_NAME} \
    --description "EKS Security Group" \
    --vpc-id ${AWS_EKS_VPC} \
    --output text 2>/dev/null)
env | grep AWS_EKS_SG

Sets the Security Group name in the AWS_EKS_SG_NAME environment variable, then retrieves the Security Group Id for that name, creating a new group if none exists. The Security Group Id is stored in the AWS_EKS_SG environment variable.
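
The `describe || create` construct used above is a common get-or-create idiom: the fallback only runs when the lookup fails. A stubbed sketch of the same control flow (the functions stand in for the two AWS CLI calls):

```shell
# Stand-in for `aws ec2 describe-security-groups` when the group is missing:
# exits non-zero and prints nothing
describe_sg() { return 1; }
# Stand-in for `aws ec2 create-security-group`: prints the new GroupId
create_sg() { echo "sg-0123456789abcdef0"; }

# If the lookup fails, the || fallback creates the group; either way
# the captured stdout is the GroupId
SG=$(describe_sg 2>/dev/null || create_sg)
echo "${SG}"   # sg-0123456789abcdef0
```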

Create EKS IAM Role

Amazon EKS makes calls to other AWS services on your behalf to manage the resources that you use with the service. Before you can use the service, you must have an IAM policy and role that provides the necessary permissions to Amazon EKS.

More information at Amazon EKS Service IAM Role.

export AWS_EKS_ROLE_NAME='AmazonEKSServiceRole'
if ! aws iam get-role --role-name ${AWS_EKS_ROLE_NAME} 2>/dev/null; then
    aws iam create-role --role-name ${AWS_EKS_ROLE_NAME} --assume-role-policy-document file://eks/iam/AmazonEKSServiceRole.json
    aws iam attach-role-policy --role-name ${AWS_EKS_ROLE_NAME} --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
    aws iam attach-role-policy --role-name ${AWS_EKS_ROLE_NAME} --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
fi
export AWS_EKS_ROLE_ARN=$(aws iam get-role --role-name ${AWS_EKS_ROLE_NAME} --query 'Role.Arn' --output text)
env | grep AWS_EKS_ROLE

Sets the role name in the AWS_EKS_ROLE_NAME environment variable, then checks whether the AmazonEKSServiceRole role exists; if not, it creates the role from eks/iam/AmazonEKSServiceRole.json and attaches the AmazonEKSServicePolicy and AmazonEKSClusterPolicy managed policies. The role ARN is afterwards stored in the AWS_EKS_ROLE_ARN environment variable.

More information at Amazon EKS Service IAM Role

Create the EKS cluster

Now you can create your Amazon EKS cluster.

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. Also, because the Heptio Authenticator uses the AWS SDK for Go to authenticate against your Amazon EKS cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.

export AWS_EKS_NAME='bcncloud-eks'
aws --region ${AWS_REGION} eks create-cluster --name ${AWS_EKS_NAME} \
    --role-arn ${AWS_EKS_ROLE_ARN} \
    --resources-vpc-config subnetIds=${AWS_EKS_VPC_SUBNETS_CSV},securityGroupIds=${AWS_EKS_SG} \
    && while true; do aws --region ${AWS_REGION} eks describe-cluster --name ${AWS_EKS_NAME} --query cluster.endpoint | grep -vq 'null' && break || sleep 10; done;
aws --region ${AWS_REGION} eks describe-cluster --name ${AWS_EKS_NAME}

Sets the cluster name and stores it in the AWS_EKS_NAME environment variable. Creates the cluster with that name using the AWS CLI and the resource ids obtained in the previous steps: AWS_EKS_ROLE_ARN, AWS_EKS_VPC_SUBNETS_CSV and AWS_EKS_SG. Then waits until the cluster endpoint is available and finally describes the EKS cluster.
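
The `while true ... grep -vq 'null' && break || sleep 10` wait loop is a generic polling pattern: `describe-cluster` returns `null` for the endpoint until the control plane is up. The same structure with a stubbed query that becomes ready on the third poll:

```shell
POLLS=0
while true; do
    POLLS=$((POLLS + 1))
    # Stand-in for `aws eks describe-cluster --query cluster.endpoint`:
    # returns "null" until the third poll
    if [ "${POLLS}" -ge 3 ]; then
        out='"https://example.sk1.us-east-1.eks.amazonaws.com"'
    else
        out='null'
    fi
    # grep -v drops lines containing "null"; -q succeeds if anything is left
    echo "${out}" | grep -vq 'null' && break || sleep 1
done
echo "endpoint available after ${POLLS} polls"   # endpoint available after 3 polls
```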

Configure EKS Kubectl context

Install jq tool

jq is like sed for JSON data - you can use it to slice and filter and map and transform structured data.

More information and other platforms at the jq download page.

Install jq on Linux
curl -OL https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
sudo install -m 755 -o root -g root jq-linux64 /usr/local/bin/jq
rm -v jq-linux64
Install jq on Mac
brew install jq
Install jq on Windows
chocolatey install jq

Install yq tool

yq is like jq but for YAML files, and it requires jq. It will be used to create the kubeconfig file for the EKS cluster.

sudo pip install yq

Get the cluster information

To create the Kubeconfig file we need the Kubernetes API endpoint and the Certificate Authority data.

export AWS_EKS_ENDPOINT=$(aws --region ${AWS_REGION} eks describe-cluster --name ${AWS_EKS_NAME} --query cluster.endpoint --output text)
export AWS_EKS_CERTAUTHDATA=$(aws --region ${AWS_REGION} eks describe-cluster --name ${AWS_EKS_NAME} --query cluster.certificateAuthority.data --output text)
env | grep AWS_EKS

Gets the Kubernetes endpoint and Certificate Authority data from the EKS resource using the CLI. The values are stored in the AWS_EKS_ENDPOINT and AWS_EKS_CERTAUTHDATA environment variables.

More information at Configure Access to Multiple Clusters.

Create the kube config file for the EKS cluster

# Context name and kubeconfig path for the EKS cluster (example values)
export KUBECTL_EKS_CONTEXT="eks-${AWS_EKS_NAME}"
export KUBECTL_EKS_CONTEXT_FILE=~/.kube/eks/${KUBECTL_EKS_CONTEXT}
mkdir -p ~/.kube/eks
yq ".clusters[].cluster.server |= \"${AWS_EKS_ENDPOINT}\" |
    .clusters[].cluster[\"certificate-authority-data\"] |= \"${AWS_EKS_CERTAUTHDATA}\" |
    .contexts[].name |= \"${KUBECTL_EKS_CONTEXT}\"" \
    eks/manifests/eks-kubeconfig.yaml --yaml-output | \
    sed "s/<cluster-name>/${AWS_EKS_NAME}/g" > ${KUBECTL_EKS_CONTEXT_FILE}

# Set the KUBECONFIG env var, adding the new KUBECTL_EKS_CONTEXT_FILE just once if needed
[[ -z "${KUBECONFIG}" ]] \
    && export KUBECONFIG=~/.kube/config:${KUBECTL_EKS_CONTEXT_FILE}
[[ "${KUBECONFIG}" != *"${KUBECTL_EKS_CONTEXT_FILE}"* ]] \
    && export KUBECONFIG=${KUBECONFIG}:${KUBECTL_EKS_CONTEXT_FILE}

Using yq, this populates the kubeconfig template with the real information from the EKS cluster and defines a new context, KUBECTL_EKS_CONTEXT, stored in the KUBECTL_EKS_CONTEXT_FILE file. Then it updates the KUBECONFIG environment variable to allow kubectl to use the new config file.

More info at Organizing Cluster Access Using kubeconfig Files
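
For reference, the template eks/manifests/eks-kubeconfig.yaml presumably follows the kubeconfig layout documented for EKS, with heptio-authenticator-aws as the exec credential plugin. The fields the yq filter and sed expression rewrite are shown as placeholders (a sketch, not the exact file):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: <endpoint-url>                       # rewritten with ${AWS_EKS_ENDPOINT}
    certificate-authority-data: <base64-ca-data> # rewritten with ${AWS_EKS_CERTAUTHDATA}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: <context-name>                           # rewritten with ${KUBECTL_EKS_CONTEXT}
current-context: <context-name>
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - token
        - -i
        - <cluster-name>                         # rewritten by the sed expression
```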

Set kubectl context to ${KUBECTL_EKS_CONTEXT}

kubectl config use-context ${KUBECTL_EKS_CONTEXT}

Check that it is working properly

kubectl get all

Add some EKS workers nodes

Choose an Amazon EKS-optimized AMI ID

Region Amazon EKS-optimized AMI ID
US West (Oregon) (us-west-2) ami-73a6e20b
US East (N. Virginia) (us-east-1) ami-dea4d5a1
  • Create the SSH Public Key for the workers ssh user
export AWS_EKS_WORKERS_KEY="EKS-${AWS_EKS_NAME}-ec2-key-pair"
aws --region ${AWS_REGION} ec2 create-key-pair --key-name ${AWS_EKS_WORKERS_KEY} \
    --query KeyMaterial --output text > ~/.ssh/eksctl_rsa

Deploy the EKS workers stack

export AWS_EKS_WORKERS_STACK="${AWS_EKS_NAME}-workers"   # CloudFormation stack name (example value)
export AWS_EKS_WORKERS_MIN='1'    # autoscaling group minimum size (example value)
export AWS_EKS_WORKERS_MAX='3'    # autoscaling group maximum size (example value)
export AWS_EKS_WORKERS_TYPE="t2.small"
export AWS_EKS_WORKERS_AMI="$([[ ${AWS_REGION} == 'us-east-1' ]] && echo ami-dea4d5a1 || echo ami-73a6e20b)"
env | grep AWS_EKS_WORKERS

aws --region ${AWS_REGION} cloudformation create-stack \
    --stack-name  ${AWS_EKS_WORKERS_STACK} \
    --capabilities CAPABILITY_IAM \
    --template-body file://eks/cloudformation/eks-nodegroup-cf-stack.yaml \
    --parameters \
        ParameterKey=NodeGroupName,ParameterValue="${AWS_EKS_NAME}-workers" \
        ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue="${AWS_EKS_WORKERS_MIN}" \
        ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue="${AWS_EKS_WORKERS_MAX}" \
        ParameterKey=NodeInstanceType,ParameterValue="${AWS_EKS_WORKERS_TYPE}" \
        ParameterKey=KeyName,ParameterValue="${AWS_EKS_WORKERS_KEY}" \
        ParameterKey=NodeImageId,ParameterValue="${AWS_EKS_WORKERS_AMI}" \
        ParameterKey=ClusterName,ParameterValue="${AWS_EKS_NAME}" \
        ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue="${AWS_EKS_SG}" \
        ParameterKey=VpcId,ParameterValue="${AWS_EKS_VPC}" \
        ParameterKey=Subnets,ParameterValue=\"${AWS_EKS_VPC_SUBNETS_CSV}\" &&
    aws --region ${AWS_REGION} cloudformation wait stack-create-complete \
        --stack-name  ${AWS_EKS_WORKERS_STACK}

Get Workers Instance Role

export AWS_EKS_WORKERS_ROLE=$(\
    aws --region ${AWS_REGION} cloudformation describe-stacks \
    --stack-name  ${AWS_EKS_WORKERS_STACK} \
    --query "Stacks[].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
    --output text)

Apply the AWS authenticator configuration map

TMP_YML=$(mktemp)
sed "s@\(.*rolearn\):.*@\1: ${AWS_EKS_WORKERS_ROLE}@g" eks/manifests/k8s-aws-auth-cm.yaml > ${TMP_YML}
cat ${TMP_YML}
kubectl apply -f ${TMP_YML}
rm -v ${TMP_YML}
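
For reference, the aws-auth ConfigMap in eks/manifests/k8s-aws-auth-cm.yaml presumably has the shape documented for EKS worker registration; the rolearn line is what the sed command above fills in:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <node-instance-role-arn>   # replaced with ${AWS_EKS_WORKERS_ROLE}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```
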
  • Check nodes
kubectl get nodes

Deploy Kubernetes Dashboard

  • Deploy the Kubernetes dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
  • Deploy heapster to enable container cluster monitoring
kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
  • Deploy the influxdb backend for heapster
kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
  • Create the heapster cluster role binding for the dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
kubectl apply -f eks/manifests/eks-admin-service-account.yaml
kubectl apply -f eks/manifests/eks-admin-binding-role.yaml
kubectl proxy

Deploy the demo-app

Apply the demo-app manifests

kubectl apply -f ../demo-app/k8s/

Expected Output:

deployment.apps/guestbook created
service/guestbook created
deployment.apps/redis-master created
service/redis-master created
deployment.apps/redis-slave created
service/redis-slave created

Get the demo-app pods

kubectl get pods

Expected output:

NAME                            READY     STATUS              RESTARTS   AGE
guestbook-574c46c86-4vvt9       1/1       Running             0          12s
guestbook-574c46c86-d7bnc       1/1       Running             0          12s
guestbook-574c46c86-qgj6g       1/1       Running             0          12s
redis-master-5d8b66464f-jphkc   0/1       ContainerCreating   0          11s
redis-slave-586b4c847c-gw2lq    0/1       ContainerCreating   0          10s
redis-slave-586b4c847c-m7nc8    0/1       ContainerCreating   0          10s

Get the demo-app services

kubectl get services

Expected output:

NAME           TYPE           CLUSTER-IP   EXTERNAL-IP       PORT(S)        AGE
guestbook      LoadBalancer   ...          <elb-dns-name>    80:31540/TCP   2m
kubernetes     ClusterIP      ...          <none>            443/TCP        1h
redis-master   ClusterIP      ...          <none>            6379/TCP       2m
redis-slave    ClusterIP      ...          <none>            6379/TCP       2m

Cleanup EKS

Remove EKS resources and workers

aws --region ${AWS_REGION} cloudformation delete-stack --stack-name ${AWS_EKS_WORKERS_STACK}
aws --region ${AWS_REGION} ec2 delete-key-pair --key-name ${AWS_EKS_WORKERS_KEY}
aws --region ${AWS_REGION} eks delete-cluster --name ${AWS_EKS_NAME}
aws --region ${AWS_REGION} ec2 delete-security-group --group-id ${AWS_EKS_SG}
aws iam detach-role-policy --role-name ${AWS_EKS_ROLE_NAME} --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
aws iam detach-role-policy --role-name ${AWS_EKS_ROLE_NAME} --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam delete-role --role-name ${AWS_EKS_ROLE_NAME}

Remove Load Balancers created by EKS

The load balancers created by Kubernetes services are not removed automatically; the following snippet prints a list of commands to remove them.

For safety, since we don't query specific resource IDs but instead match tags generated by EKS, the snippet doesn't delete anything automatically and requires manual verification. Check the ELBs before executing the delete commands.

export AWS_ELBS=$(\
    aws --region ${AWS_REGION} elb describe-load-balancers \
        --query 'LoadBalancerDescriptions[].LoadBalancerName' \
        --output text)
for ELB in ${AWS_ELBS}; do
    AWS_EKS_ELB_TAG=$(\
        aws --region ${AWS_REGION} elb describe-tags \
            --load-balancer-names ${ELB} \
            --query "TagDescriptions[].Tags[?Key=='${AWS_EKS_NAME}'].Value" \
            --output text)
    if [[ "${AWS_EKS_ELB_TAG}" == "owned" ]]; then
        echo "# ${ELB} seems to be owned by the EKS cluster, to remove it execute:"
        ELB_SG=$(aws --region ${AWS_REGION} elb describe-load-balancers \
            --load-balancer-names ${ELB} \
            --query 'LoadBalancerDescriptions[].SourceSecurityGroup.GroupName' \
            --output text)
        echo "aws --region ${AWS_REGION} elb delete-load-balancer --load-balancer-name ${ELB}"
        echo "aws --region ${AWS_REGION} ec2 delete-security-group --group-name ${ELB_SG}"
    fi
done

Gets a list of all the ELBs in the EKS region. For each one, checks whether it has a tag with ${AWS_EKS_NAME} as key and owned as value. If so, it prints the AWS CLI commands to remove the ELB and its security group.

Expected output:

# a7e0fda097d5811e89be902c32f5cb30 seems to be owned by the EKS cluster, to remove it execute:
aws --region us-west-2 elb delete-load-balancer --load-balancer-name a7e0fda097d5811e89be902c32f5cb30
aws --region us-west-2 ec2 delete-security-group --group-name k8s-elb-a7e0fda097d5811e89be902c32f5cb30
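
The ownership check in the snippet above can be sketched with stubbed AWS calls to show the control flow: only load balancers whose cluster tag resolves to owned get delete commands printed.

```shell
# Stand-in for `aws elb describe-tags --query ...`: returns "owned" only
# for the ELB that carries the EKS cluster tag
elb_tag() {
    case "$1" in
        a7e0fda0*) echo "owned" ;;
        *)         echo "" ;;
    esac
}

for ELB in a7e0fda097d5811e89be902c32f5cb30 unrelated-elb; do
    if [ "$(elb_tag "${ELB}")" = "owned" ]; then
        echo "aws elb delete-load-balancer --load-balancer-name ${ELB}"
    fi
done
# prints a single delete command, for a7e0fda097d5811e89be902c32f5cb30 only
```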

Amazon Web Services - eksctl (alpha)

  • What is eksctl?

eksctl is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It is written in Go, and based on Amazon's official CloudFormation templates.

You can create a cluster in minutes with just one command – eksctl create cluster!

eksctl - a CLI for Amazon EKS

Prerequisites for eksctl

curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo install -m 755 -o root /tmp/eksctl /usr/local/bin

Deploy eksctl cluster

Set region variable for eksctl

export AWS_EKSCTL_REGION='us-west-2'

Create the SSH Public Key for the eksctl admin user

aws ec2 create-key-pair --key-name EKS-eksctl-key --region ${AWS_EKSCTL_REGION} --query KeyMaterial --output text > ~/.ssh/eksctl_rsa

Deploy the cluster

eksctl create cluster \
    --cluster-name eksctl \
    --region ${AWS_EKSCTL_REGION} \
    --nodes-min 1 \
    --nodes-max 3 \
    --node-type t2.micro \
    --auto-kubeconfig \
    --ssh-public-key EKS-eksctl-key --verbose 4

eksctl cleanup

aws --region ${AWS_EKSCTL_REGION} ec2 delete-key-pair --key-name EKS-eksctl-key
eksctl delete cluster --cluster-name eksctl --region ${AWS_EKSCTL_REGION}

Amazon Web Services - kops

  • What is kops?

We like to think of it as kubectl for clusters.

kops helps you create, destroy, upgrade and maintain production-grade, highly available Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE in beta support, and VMware vSphere in alpha, and other platforms planned.

Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management

Prerequisites for KOPS

  • Linux setup
wget -O kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/

More information and other platforms at the kops repository.

Deploy KOPS cluster

Create State Bucket

aws s3api create-bucket \
    --bucket <kops-state-bucket> \
    --region us-east-1
  • Export the KOPS_STATE_STORE env var to avoid having to pass the state param every time
export KOPS_STATE_STORE=s3://<kops-state-bucket>

Create the SSH Public Key for the kops ssh admin user

ssh-keygen -t rsa -N '' -b 4096 -C 'kops ssh key pair' -f ~/.ssh/kops_rsa

Create the cluster

kops create cluster \
    --name <cluster-name> \
    --master-size t2.micro \
    --master-count 3 \
    --master-zones eu-west-1a,eu-west-1b \
    --node-count 3 \
    --node-size t2.micro \
    --zones eu-west-1a,eu-west-1b \
    --state s3://<kops-state-bucket> \
    --ssh-public-key ~/.ssh/kops_rsa.pub \
    --yes && \
    while true; do kops validate cluster && break || sleep 30; done;