

Building sandbox environments with EKS Distro using the AWS CDK to enable application development for hybrid EKS environments

If you've worked extensively with Kubernetes projects in the last few years, you have probably come across a significant number of enterprise-grade, large-scale implementations that required dealing with hybrid container solutions, often with different sets of tooling and frameworks, and with disparate environments exposing different APIs and services for building applications on-premises versus in the cloud. As a result, you may also have seen customers spend time re-architecting applications for different environments rather than building them just once to run anywhere, using the same wide range of popular services and APIs for applications that need to run on-premises as well as in the cloud.

EKS Distro to the rescue

Recognizing this trend, AWS launched on-premises container offerings such as EKS Distro (EKS-D) and EKS Anywhere (EKS-A), which help customers struggling to manage hybrid Kubernetes environments. As these are relatively new deployment options, though, teams may face challenges while adopting them for software delivery. In addition, production deployments with either of them will most likely target on-premises environments such as VMware or bare-metal servers, where the typical infrastructure-as-code tools (e.g., Terraform, the AWS CDK) that enable creating, changing, and improving cloud infrastructure through automation are not a natural fit. Nevertheless, EKS Distro is still a great option for enabling application development for hybrid Kubernetes environments, and its provisioning can be seamlessly automated with software development frameworks that define the cloud resources needed to stand up development and test environments on the AWS Cloud. This approach gives developers the same experience when creating, testing, and running their applications regardless of the environment (development, testing, acceptance, or production) to which they deploy them. With this in mind, in this project you will learn how to use the AWS CDK to automate the building of EKS-D environments and streamline hybrid Kubernetes application development based on EKS.

EKS Distro is an installable set of the same open source Kubernetes components used by Amazon EKS, tested for interoperability, security, and scale. It includes the same components that form the Kubernetes control plane (e.g., API server, CoreDNS, etcd, scheduler) and data plane (kubelet, CNI plugins, kubectl) and is made freely available when public Kubernetes releases occur. A set of related components works together to form the larger Kubernetes platform, and some of them, like etcd and CoreDNS, are actually separate open source projects that must be tested for interoperability and version compatibility. In other words, AWS puts in a lot of work to select the right Kubernetes version and validate its dependencies, providing the security, stability, and operational excellence that reduce the complexity of Kubernetes operations and give customers a smoother experience and faster project implementations.

EKS Anywhere provides cluster lifecycle management tooling that enables customers to install, upgrade, and manage EKS Distro-based clusters at scale in on-premises environments. The goal is to provide a managed, EKS-like experience regardless of where Kubernetes workloads are running. To accomplish this, EKS Anywhere provides an opinionated bundle of tooling that accounts for the differences in how on-premises environments are configured compared to the cloud, allowing customers to deploy, manage, and operate Kubernetes environments on their own bare-metal or virtualized infrastructure based on EKS Distro.

In summary, the way to think about these two offerings is that EKS Distro gives customers Kubernetes version consistency, whereas EKS Anywhere gives them cluster operations lifecycle consistency with AWS.


Andre Boaventura, AWS Principal Solutions Architect

A 10,000-foot view of the Hybrid-EKS development environment architecture

In this project, you'll learn how to build and automate the creation of development and prototype environments for hybrid software delivery, using the AWS CDK to automate EKS Distro environment provisioning for development purposes, allowing a seamless experience while standing up and standardizing Kubernetes environments and application deployments on top of EKS. The project was designed to explore ways and best practices to abstract away the challenges and complexity of deploying hybrid-EKS development infrastructure for DevOps teams, allowing repeatable workload development and testing that can be easily integrated with existing CI/CD pipelines in a simple and consistent way. The approaches explored throughout this example can then be used by developers across an enterprise and by diverse teams in a repeatable and predictable way. For operations teams, they not only simplify the deployment of well-architected hybrid-EKS development environments, automating software delivery with DevOps techniques, best practices, and the AWS CDK, but also consolidate administration and monitoring of all EKS clusters in a single pane of glass by using the EKS Console, via its integration with the EKS Connector, as a central management platform.


  1. Getting our hands dirty with EKS Distro and AWS CDK
  2. Spinning up an EKS cluster on AWS Cloud
  3. Building and Deploying a REST API with Node.js, Express, and Amazon DocumentDB
  4. Monitoring EKS Distro by using EKS Connector (Optional)
  5. Walkthrough Demo

Hybrid EKS development environment architecture


1. Getting our hands dirty with EKS Distro and AWS CDK

That’s enough theory. Time to roll up our sleeves and get going with EKS Distro! In this section, we will deploy the following architecture with EKS Distro.




In this section, I will explain how to deploy EKS Distro using the AWS CDK in the AWS Cloud9 workspace:

a. Create and configure an AWS Cloud9 environment

b. Clone the Hybrid-EKS development environment sample code repository and install dependencies

c. Getting to know the EKS Distro CDK app

d. Set up AWS CDK in the AWS Cloud9 workspace

e. Changing AWS CDK app parameters before deploying stacks

f. Building and deploying the EKS Distro CDK app

g. Accessing the EKS Distro environment

h. Validating Cluster DNS Configuration (OPTIONAL)

i. Validating Cluster Deployment (OPTIONAL)

j. Scaling out your EKS Distro cluster (OPTIONAL)

a. Create and configure an AWS Cloud9 environment

Log in to the AWS Management Console and search for Cloud9 services in the search bar:


Click Cloud9 and create an AWS Cloud9 environment in any region of your preference. In this blog, I'll be using us-west-2 as my preferred region.


Launch the AWS Cloud9 IDE and open a new terminal session. Before getting started with the installation, you will need to grant the permissions required to create the EKS Distro cluster with kops. There are a few ways to provide credentials to the Cloud9 environment for cluster deployment, such as attaching an IAM instance profile to the EC2 instance, or creating/modifying an IAM user with permanent AWS access credentials that are stored within the environment using either aws configure or environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). Detailed instructions on how to create an IAM role for the Cloud9 workspace can be found here, and how to attach it to the just-created workspace here. Additionally, you must turn off the AWS managed temporary credentials of the Cloud9 environment, as explained here. You may also need to resize the Amazon Elastic Block Store (Amazon EBS) volume associated with the Amazon EC2 instance backing this AWS Cloud9 environment, as documented here.

b. Clone the Hybrid-EKS development environment sample code repository and install dependencies

Let's get started by installing the dependencies to set up this environment. If needed, there are more details on how to configure kubectl here.

## Installing kubectl (1.22)
curl -o kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl 
export PATH=$PATH:$HOME/bin && echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

sudo yum install jq -y
export AWS_DEFAULT_REGION=$(curl -s | jq -r '.region')

Clone the GitHub repository containing the code sample for this example:

git clone
export HOME_REPO=$HOME/environment/aws-hybrid-eksd-dev-sandbox

c. Getting to know the EKS Distro CDK app

This repository contains the source code for an AWS CDK app consisting of two stacks (CdkRoute53Stack and CdkEksdistroStack), which are equivalent to AWS CloudFormation stacks, to automate EKS Distro deployment. They are written in TypeScript and use the AWS CDK to define the AWS infrastructure needed to stand up the EKS Distro cluster. In turn, stacks contain constructs, each of which defines one or more concrete AWS resources, such as EC2 instances, IAM roles, and S3 buckets. Below you can find each stack, with its respective AWS resources, initialized by some of the AWS CDK constructs.

Key AWS CDK constructs used in the CdkRoute53Stack (./cdk/cdk-eksdistro/lib/cdk-route53-stack.ts)

An AWS Route 53 parent domain and child domain
if (crossAccountRoute53) {
    const IsParentAccount = this.node.tryGetContext('IsParentAccount');

    if (IsParentAccount) {
        const childAccountId = this.node.tryGetContext('childAccountId');

        // Parent hosted zone is created. Child hosted zone will be exported into this record
        const parentZone = new route53.PublicHostedZone(this, 'HostedZone', {
            zoneName: zoneName, // ''
            crossAccountZoneDelegationPrincipal: new iam.AccountPrincipal(childAccountId),
            crossAccountZoneDelegationRoleName: 'MyRoute53DelegationRole',
        });
    } else {
        // Child hosted zone is created
        const subZone = new route53.PublicHostedZone(this, 'SubZone', {
            zoneName: subZoneName // E.g.: ''
        });

        // import the delegation role by constructing the roleArn
        const parentAccountId = this.node.tryGetContext('parentAccountId');

        const delegationRoleArn = Stack.of(this).formatArn({
            region: '', // IAM is global in each partition
            service: 'iam',
            account: parentAccountId,
            resource: 'role',
            resourceName: 'MyRoute53DelegationRole',
        });
        const delegationRole = iam.Role.fromRoleArn(this, 'DelegationRole', delegationRoleArn);

        // Export the record under the parent Hosted Zone in a different AWS account
        new route53.CrossAccountZoneDelegationRecord(this, 'delegate', {
            delegatedZone: subZone,
            delegationRole: delegationRole,
            parentHostedZoneName: zoneName, // E.g.: '' or you can use parentHostedZoneId
        });
    }
} else {
    // Child hosted zone is created
    new route53.PublicHostedZone(this, 'SubZone', {
        zoneName: subZoneName // E.g.: ''
    });
}

Key AWS CDK constructs used in the CdkEksDistroStack (./cdk/cdk-eksdistro/lib/cdk-eksdistro-stack.ts)

A key pair to be assigned to the EC2 instance for accessing it from the AWS Cloud9 terminal
    const key = new KeyPair(this, 'KeyPair', {
       name: 'cdk-eksd-key-pair',
       description: 'Key Pair created with CDK Deployment',
    });
A security group to allow inbound connection via SSH (port 22)
    const vpc = ec2.Vpc.fromLookup(this, 'DefaultVPC', { isDefault: true });
    const securityGroup = new ec2.SecurityGroup(this, 'SecurityGroup', {
      vpc,
      description: 'Allow SSH (TCP port 22) in',
      allowAllOutbound: true
    });
    securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(22), 'Allow SSH Access')
IAM role with the following required policies to install EKS-D with kops
    const role = new iam.Role(this, 'ec2-EKSD-Role', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com')
    });
An EC2 instance using the default VPC with the required dependencies to install EKS Distro through kops. It uses the CloudFormation init `cfn-init` helper script as a way to execute init scripts once the EC2 instance boots to install packages.
    const ami = new ec2.AmazonLinuxImage({
      generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
      cpuType: ec2.AmazonLinuxCpuType.X86_64
    });

    // Create the instance using the Security Group, AMI, library dependencies
    // and KeyPair based on the default VPC
    const ec2Instance = new ec2.Instance(this, 'Instance', {
      vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.LARGE),
      machineImage: ami,
      init: ec2.CloudFormationInit.fromElements(
          ec2.InitCommand.shellCommand('sudo yum update -y'),
          ec2.InitCommand.shellCommand('sudo yum install git -y'),
          ec2.InitCommand.shellCommand('sudo yum install jq -y')
      ),
      blockDevices: [{
            deviceName: '/dev/xvda',
            volume: ec2.BlockDeviceVolume.ebs(50),
      }],
      securityGroup: securityGroup,
      keyName: key.keyPairName,
      role: role
    });
Create an asset that will be used as part of User Data to run on first load
    const asset = new Asset(this, 'Asset', { path: path.join(__dirname, '../src/') });
    const localPath = ec2Instance.userData.addS3DownloadCommand({
      bucket: asset.bucket,
      bucketKey: asset.s3ObjectKey,
    });

    // Run the downloaded user-data script on first boot
    ec2Instance.userData.addExecuteFileCommand({
      filePath: localPath,
      arguments: '--verbose -y'
    });

CdkEksDistro CDK app (./cdk/cdk-eksdistro/lib/cdk-eksdistro.ts)

This is the entry point of the EKS Distro CDK application. It loads the two stacks defined in ./cdk/cdk-eksdistro/lib/cdk-eksdistro-stack.ts and ./cdk/cdk-eksdistro/lib/cdk-route53-stack.ts. Please note that, because there is a stack dependency (EKS Distro may only be provisioned once a registered AWS Route 53 domain is configured), I added an ordering dependency between the two stacks using the stackA.addDependency(stackB) method, so that the CdkEksDistroStack stack only runs after a successful completion of the CdkRoute53Stack stack.

Additionally, you can use either CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION or CDK_DEPLOY_ACCOUNT and CDK_DEPLOY_REGION to set the AWS region and AWS account where the 2 stacks will be deployed. The former is determined by the AWS CDK CLI at the time of synthesis, whereas the latter uses environment variables to let you override the account and region at synthesis time.
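To make that precedence concrete, here is a minimal shell sketch of the same fallback logic the app implements with process.env.CDK_DEPLOY_ACCOUNT || process.env.CDK_DEFAULT_ACCOUNT. The resolve_target helper and the account IDs are hypothetical, for illustration only; they are not part of the repository.

```shell
# Hypothetical helper mirroring the app's fallback:
# CDK_DEPLOY_* (explicit override) wins; otherwise the
# CLI-resolved CDK_DEFAULT_* values are used.
resolve_target() {
  echo "${CDK_DEPLOY_ACCOUNT:-$CDK_DEFAULT_ACCOUNT}:${CDK_DEPLOY_REGION:-$CDK_DEFAULT_REGION}"
}

# No overrides set: falls back to the CLI-resolved defaults
export CDK_DEFAULT_ACCOUNT=111122223333 CDK_DEFAULT_REGION=us-west-2
resolve_target    # prints 111122223333:us-west-2

# Overrides set: CDK_DEPLOY_* takes precedence
export CDK_DEPLOY_ACCOUNT=555566667777 CDK_DEPLOY_REGION=eu-west-1
resolve_target    # prints 555566667777:eu-west-1
```

This is why setting CDK_DEPLOY_ACCOUNT and CDK_DEPLOY_REGION before cdk deploy lets you retarget the stacks without touching the app code.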

   const app = new cdk.App();

   // Route53 Stack - requirement for running the EKS Distro stack
   const stackRoute53 = new CdkRoute53Stack(app, 'CdkRoute53Stack');

   // EKS Distro Stack
   const stackEKSD = new CdkEksDistroStack(app, 'CdkEksDistroStack', {
     env: {
       account: process.env.CDK_DEPLOY_ACCOUNT || process.env.CDK_DEFAULT_ACCOUNT,
       region: process.env.CDK_DEPLOY_REGION || process.env.CDK_DEFAULT_REGION
     }
   });

   // Make sure CdkEksDistroStack only deploys after CdkRoute53Stack completes
   stackEKSD.addDependency(stackRoute53);

d. Set up AWS CDK in the AWS Cloud9 workspace

First off, ensure AWS CDK is installed and bootstrapped. The CDK uses the same supporting infrastructure for all projects within a region, so you only need to run the bootstrap command once in any region in which you create CDK stacks. In this example, let us use us-west-2 as the preferred region. Also, npm install will install all the latest CDK modules under the node_modules directory according to the definitions and dependencies declared in the package.json file.

cd $HOME_REPO/cdk/cdk-eksdistro
npm install
cdk bootstrap

e. Changing AWS CDK app parameters before deploying stacks

User Data (REQUIRED)

The /src/ file is used as user data by the EC2 instance spun up by the CdkEksDistroStack to perform the following tasks:

  • install dependencies (e.g., git, jq, kubectl, kops)
  • add environment variables required by kops to the shell initialization file ~/.bashrc
  • clone the EKS Distro repository
  • create the cluster configuration
  • wait for the cluster to come up until deployment is finished

This script is located in your AWS Cloud9 environment at $HOME_REPO/cdk/cdk-eksdistro/src/

You should change the following environment variables to point to your environment configuration accordingly before deploying the CDK app.

All these changes are REQUIRED:


  • KOPS_CLUSTER_NAME is a valid subdomain controlled by AWS Route 53.
  • KOPS_STATE_STORE is the URL of the S3 bucket that will store the kops configuration.
  • IAM_ARN is either an IAM user or role allowed to view Kubernetes resources in the Amazon EKS console; it will be associated with a Kubernetes role or clusterrole with the permissions needed to read EKS Distro resources. More information on granting a user access to view Kubernetes resources on a cluster is available here.
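For illustration only, the edited exports in the user-data script might look like the following. The domain, bucket name, and ARN below are placeholders I made up, not values from this project; substitute your own before deploying.

```shell
# Placeholder values -- substitute your own Route 53 subdomain,
# S3 bucket, and IAM principal before deploying.
export KOPS_CLUSTER_NAME=eksd.example.com
export KOPS_STATE_STORE=s3://eksd-example-kops-state-store
export IAM_ARN=arn:aws:iam::111122223333:role/EksConsoleViewRole
```

Note that KOPS_CLUSTER_NAME must match the subZoneName context value you configure later, since kops uses this DNS name for cluster discovery.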

Environment variables changes (OPTIONAL)


  • RELEASE_BRANCH is the Kubernetes distribution used by EKS-D
  • RELEASE is the EKS-D release
  • EKSCONNECTOR_CLUSTER_NAME is the name used to register the EKS-D cluster with the EKS Console by using the EKS Connector.

More details on EKS-D releases can be found here.

AWS CDK context values (REQUIRED)

Context values are key-value pairs that can be associated with an AWS CDK app, stack, or construct, and they can be provided in different ways.

In this example, there are some key-value pairs that control how the application deploys the AWS Route 53 parent domain and subdomain, depending on whether a multi-account AWS setup will be used to deploy the EKS Distro cluster through kOps. This AWS CDK configuration file is located in your AWS Cloud9 environment at $HOME_REPO/cdk/cdk-eksdistro/cdk.context.json

That said, you need to indicate whether you will be deploying the parent hosted zone in a different account than the child hosted zone. If so, set "crossAccountRoute53": true and run the AWS CDK app described in the section below twice:

  1. First, on the parent account, setting "IsParentAccount": true

  2. Then, on the child account, setting "IsParentAccount": false

Also, remember to properly set up parentAccountId and childAccountId as needed.

Otherwise, if you will be setting up everything under the same AWS account (the most common scenario), you only need to run the AWS CDK app once and set "crossAccountRoute53": false, in which case parentAccountId and childAccountId won't be used.
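For the common single-account scenario, the cdk.context.json would look similar to the following sketch (example.com is a placeholder; use a domain you actually control in Route 53):

```json
{
  "crossAccountRoute53": false,
  "zoneName": "example.com",
  "subZoneName": "eksd.example.com"
}
```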

Regardless of the approach you'll be using for deploying the AWS Route 53 domains, remember to set both zoneName and subZoneName.

subZoneName should be set to the same domain defined in the environment variable KOPS_CLUSTER_NAME in the user-data script /src/ file, as detailed in the previous section.

  "crossAccountRoute53": true,
  "IsParentAccount": false,
  "parentAccountId": "111222333444", 
  "childAccountId": "555666777888",  
  "zoneName": "",        
  "subZoneName": "" 

f. Building and deploying the EKS Distro CDK app

Deploying the EKS Distro CDK app is a straightforward task. You basically need to execute npm run build to compile the TypeScript to JavaScript, and then execute cdk deploy --all to deploy both stacks to the configured AWS account and region, which may take about 5-6 minutes to complete.

npm run build
cdk deploy --all

Type y to confirm the deployment of both stacks.

CdkRoute53Stack CdkEksdistroStack

The syntax and additional details of the AWS CDK commands are documented here.

g. Accessing the EKS Distro environment


Once the deployment finishes, take note of the following stack outputs:

  • CdkEksDistroStack.DownloadKeyCommand: The command needed to download the private key that was created.
  • CdkEksDistroStack.EC2PublicIPaddress: The EC2 public IP address
  • CdkEksDistroStack.EKSDistrokubeconfigscpcommand: The command used to copy the EKS Distro kubeconfig file to the AWS Cloud9 environment, so you can connect to the cluster using kubectl from the AWS Cloud9 instance.
  • CdkEksDistroStack.sshcommand: The command used to connect to the instance.


Keys and Access

A Key Pair is created as part of this project. The public key will be installed as an authorized key in the EC2 instance. To connect to the instance:

  1. Download the private key from AWS Secrets Manager:

    # This will download the key as `cdk-eksd-key-pair.pem` and grant permissions.
    aws secretsmanager get-secret-value --secret-id ec2-ssh-key/cdk-eksd-key-pair/private --query SecretString --output text > cdk-eksd-key-pair.pem && chmod 400 cdk-eksd-key-pair.pem
  2. SSH to the instance using the command provided from the stack's output CdkEksDistroStack.sshcommand.

    For example:

    ssh -i cdk-eksd-key-pair.pem -o IdentitiesOnly=yes ec2-user@X.X.X.X

    Find the command for your specific instance in the stack's output.


Upon connecting to the EC2 instance, you can check the cloud-init output log file /var/log/cloud-init-output.log, which captures console output and makes it easy to monitor and/or debug the automated script /src/ while it launches the EKS Distro cluster using kOps. Run the following command to monitor the EC2 instance initialization and, thus, all details of the EKS Distro cluster provisioning, if needed:

    sudo tail -f /var/log/cloud-init-output.log


Note: If you come across any glitches while deploying EKS Distro, with a message similar to Error: error building complete spec: error reading tag file, it's likely that the chosen EKS Distro release is unavailable or hasn't passed the build process. Make sure you're using the latest distribution as per the instructions in the EKS Distro GitHub repo. If needed, change both RELEASE_BRANCH and RELEASE (as per the instructions above) to point to the desired EKS distribution and redeploy the CdkEksDistro CDK app.

h. Validating Cluster DNS Configuration (OPTIONAL)

This example leverages kOps to provision the cloud infrastructure required to deploy the EKS Distro clustered environment. You can think of kops (a.k.a. Kubernetes Operations) as kubectl for clusters: it allows Kubernetes administrators to create, destroy, upgrade, and maintain production-grade, highly available clusters.

kops uses DNS for discovery, both inside and outside the cluster, so that you can reach the Kubernetes API server from clients. A top-level domain or a subdomain is required to create the cluster. This domain allows the worker nodes to discover the master, and the master to discover all the etcd servers. It is also needed for kubectl to be able to connect directly to the master node. This example uses both the zoneName and subZoneName context parameters of the CdkRoute53Stack app to set up AWS Route 53, as detailed in the previous section, AWS CDK context values.


All traffic sent to subZoneName will be routed to the correct subdomain hosted zone in AWS Route 53. You can check your DNS configuration as follows:


;; ANSWER SECTION:
    300    IN    NS
    300    IN    NS
    300    IN    NS
    300    IN    NS

Alternatively, you can run the following command to check your NS records:

aws route53 list-resource-record-sets \
--output=table \
--hosted-zone-id `aws route53 --output=json list-hosted-zones | jq -r --arg SUBZONENAME "$KOPS_CLUSTER_NAME." '.HostedZones[] | select(.Name==$SUBZONENAME) | .Id' | cut -d/ -f3|cut -d\" -f1`

The guides Configure DNS for kops and Using kops with an AWS Route 53 subdomain can be helpful in case you need to troubleshoot your installation.

i. Validating cluster deployment (OPTIONAL)

The EKS Distro installation comes with kops binaries and some scripts to streamline the deployment process. In turn, kOps is used to create and manage Kubernetes clusters, including EKS Distro clusters, with multiple master and worker nodes distributed across multiple AZs for high availability. Behind the scenes, it spins up EC2 instances, sets up security with IAM users and IAM roles, and configures networking, including the VPC, subnets, routing tables, Route 53 NS records, security groups, and Auto Scaling groups, among other AWS resources, to deploy EKS Distro on top of a highly scalable infrastructure.

The automation script, /src/, exports KOPS_STATE_STORE, pointing to the S3 bucket for the EKS-D kops configuration. This S3 bucket stores both the state and the representation of the EKS-D cluster. When the kops state store does not exist, the cluster configuration script creates one for you. You must have set KOPS_CLUSTER_NAME to the same valid subdomain controlled by AWS Route 53 that you configured in the previous section when setting the subZoneName parameter in the ./cdk/cdk-eksdistro/cdk.context.json file.

As such, you may find that this script runs ./ to create both the configuration and the required cloud resources for the EKS Distro cluster:

Here is a sample output for REFERENCE ONLY as this has already been automated for you:


Wait for the EKS-D cluster to come up

It may take a while (typically 15 minutes) for the cluster to be up and running, but you can check it with the following command until it is ready for use:

cd ~/eks-distro/development/kops
kops validate cluster

Upon completion, you may confirm whether the pods in your cluster are using the EKS Distro images by running the following command:

kubectl --context $KOPS_CLUSTER_NAME get pod --all-namespaces -o json | jq -r '.items[].spec.containers[].image' | sort -u


By default, this configuration stands up a Kubernetes cluster with 1 master and 3 worker nodes, with high availability provided by a kops concept called instance groups: a set of instances representing a group of similar machines, typically provisioned in the same availability zone and purposely grouped together (e.g., control-plane-us-west-2a = master node and nodes = worker nodes, as shown in the figure below).

On AWS, instance groups are implemented via Auto Scaling groups (ASGs), allowing administrators to configure several instance groups, for example splitting worker nodes and master nodes, defining a mix of Spot and On-Demand Instances, or GPU and non-GPU instances as needed. Thus, each instance in the cluster is automatically monitored and rebuilt by AWS if it suffers any failure. The default configuration has one ASG for the master nodes and another for the worker nodes, with 1 and 3 nodes, respectively. However, you'll learn how to adjust these configurations to scale out your EKS Distro cluster with kops in the next steps of this section.


Note that kops has just created a DNS record for the Kubernetes API. You can check this record with the following dig command:

dig +short api.$KOPS_CLUSTER_NAME A


or through the AWS Console as follows


In the example above, the record name resolves to the public IP assigned to the master node. Alternatively, we can send a command to the master node via SSH, which outputs the same IP address shown for this record above, indicating that the Route 53 DNS resource record and the master node are working properly:

ssh -i ~/.ssh/id_rsa ubuntu@api.$KOPS_CLUSTER_NAME "ec2metadata --public-ipv4"


j. Scaling out your EKS Distro cluster (OPTIONAL)

Let’s say the workload has significantly increased and our cluster needs 3 additional worker nodes to serve more requests. We can do that by changing the instance group (mapped to an ASG) as explained above. First, let’s pull the instance group names from our cluster:

kops get instancegroups

NAME                            ROLE    MACHINETYPE     MIN     MAX     ZONES
control-plane-us-west-2a        Master  t3.medium       1       1       us-west-2a
nodes                           Node    t3.medium       3       3       us-west-2a,us-west-2b,us-west-2c

Then, let’s edit the instance group with the following command:

kops edit ig nodes


As you can see in the configuration above, you can change parameters like the IAM role, availability zones, machine type, etc. As our goal is to scale out, change both minSize and maxSize to 6 and save the new configuration; this is the number of worker nodes in our Kubernetes cluster after the update.
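For reference, after the edit the instance group spec opened by kops edit ig nodes should look similar to the following sketch (fields abridged; the cluster label and image fields are omitted, and the machine type and zones match this example's defaults):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  machineType: t3.medium
  maxSize: 6
  minSize: 6
  role: Node
  subnets:
  - us-west-2a
  - us-west-2b
  - us-west-2c
```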

High availability is implemented by the AWS Auto Scaling group, which ensures the cluster scales out as needed. The Auto Scaling group "nodes" starts by launching enough instances to meet its desired capacity as per the instance group configuration, and it maintains this number of instances by performing periodic health checks on the instances in the group. It continues to maintain a fixed number of instances even if an instance becomes unhealthy: the group terminates the unhealthy instance and launches another to replace it.

Run the following command to preview the changes that kops will apply:

kops update cluster --name $KOPS_CLUSTER_NAME


After reviewing the new instance group configuration, run the same command again, but this time specifying --yes to apply the changes into the environment:

kops update cluster --name $KOPS_CLUSTER_NAME --yes


Alternatively, we could also change the ASG via the AWS CLI or even the AWS Console. See below how to do it with the AWS CLI.

aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name nodes.${KOPS_CLUSTER_NAME} \
    --min-size 6 \
    --max-size 6 \
    --desired-capacity 6

Note: The downside of this approach is that your cluster configuration won’t be in-sync with the AWS ASG configuration regarding the number of worker nodes.

While the changes are being applied and the new nodes join the cluster, it's a good time to do something else for about 5 minutes. If you run kops validate cluster in the meantime, you will get a status message like the following:

KIND    NAME                    MESSAGE
Machine i-05f2f99e639e89b7a     machine "i-05f2f99e639e89b7a" has not yet joined cluster
Machine i-0c69b4620646b4a8b     machine "i-0c69b4620646b4a8b" has not yet joined cluster
Machine i-0e169d852e022cc34     machine "i-0e169d852e022cc34" has not yet joined cluster

After a while, check your cluster again to make sure all worker nodes have joined as expected. You can use either kops validate cluster in your terminal


or the AWS Console as shown below:


As you can see above, our EKS Distro cluster has scaled out to 6 worker nodes and is up and running, so we can deploy applications.

Finally, let's copy the just-created EKS-D kubeconfig file into the AWS Cloud9 environment so that we can use kubectl from the AWS Cloud9 environment in the next steps. Exit the SSH session established with the EC2 instance managing the EKS Distro cluster, and execute the commands below in the AWS Cloud9 terminal instead.


Remember to use the value from the CdkEksDistroStack.EKSDistrokubeconfigscpcommand output key before copying the EKS-D kubeconfig file using scp.


# EKS Distro kubeconfig setup
mkdir -p $HOME/.kube
scp -i cdk-eksd-key-pair.pem ec2-user@X.X.X.X:$HOME/.kube/config $HOME/.kube/config
export CONTEXT_EKSD=$(kubectl config view -o jsonpath='{.contexts[0].name}')
mv $HOME/.kube/config $HOME/.kube/eksd.kubeconfig

Great job! If you've made it to this point, your EKS Distro cluster is all set. Next, let's move on to the next step, which is creating an EKS cluster on the AWS Cloud.

2. Spinning up an EKS cluster on AWS Cloud

EKS architecture is designed to eliminate single points of failure that could compromise the availability and durability of the Kubernetes control plane. The Kubernetes control plane managed by EKS runs inside an EKS-managed VPC and comprises the Kubernetes API server nodes and the etcd cluster. The API server nodes, which run components like the API server, scheduler, and kube-controller-manager, run in an Auto Scaling group, and EKS runs a minimum of two API server nodes in distinct Availability Zones (AZs) within an AWS Region. Likewise, for durability, the etcd server nodes run in an Auto Scaling group that spans three AZs. EKS runs a NAT gateway in each AZ, and the API servers and etcd servers run in private subnets. This architecture ensures that an event in a single AZ doesn't affect the EKS cluster's availability. When you create a new cluster, Amazon EKS creates a highly available endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using tools like kubectl). The managed endpoint uses an NLB to load-balance the Kubernetes API servers, and EKS also provisions two ENIs in different AZs to facilitate communication with your worker nodes.

Creating EKS clusters on the AWS Cloud is a simple and straightforward process. There are different ways of accomplishing this task (e.g., AWS Console, AWS CLI, AWS CDK), but in this example we'll utilize eksctl, a CLI tool written in Go that creates EKS clusters on AWS by leveraging CloudFormation to provision all the required infrastructure (e.g., VPC, subnets, load balancing, internet gateway, auto scaling groups) to get an EKS cluster up and running in a matter of minutes. More details on how to configure eksctl can be found here.

In this section, we will create the following architecture with an EKS cluster as depicted in the diagram below. The goal is to spin up a new EKS cluster “flavor” to demonstrate how we can manage it together with the previously created EKS Distro. As such, run the following commands to get started with the cluster deployment:

## Installing eksctl
curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C $HOME/bin

export AWS_REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
export EKS_CLUSTER_NAME=eks-dev
eksctl create cluster --name=$EKS_CLUSTER_NAME --nodes=6 --region=$AWS_REGION


Just go and grab some coffee as this step usually takes about 10-15 minutes to complete.
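The flag-based invocation above also has a declarative equivalent: eksctl accepts a ClusterConfig file via --config-file. A minimal sketch written out with a heredoc (the nodegroup name is an assumption; the cluster name, region default, and node count mirror the flags used above):

```shell
# Write a ClusterConfig equivalent to: eksctl create cluster --name=$EKS_CLUSTER_NAME --nodes=6
cat > cluster.yaml <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${EKS_CLUSTER_NAME:-eks-dev}
  region: ${AWS_REGION:-us-east-1}
nodeGroups:
  - name: ng-1                # hypothetical nodegroup name
    desiredCapacity: 6        # matches --nodes=6
EOF
# eksctl create cluster --config-file cluster.yaml   # equivalent invocation
grep 'kind:' cluster.yaml
```

Keeping the cluster definition in a file makes the node count and region reviewable and repeatable across environments.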


Upon cluster creation, you can test if your installation has successfully completed by running kubectl get nodes. It should return an output similar to the following:


Let’s consolidate the 2 cluster configurations into $HOME/.kube/config, which is the first location used by kubectl to find the information it needs to choose a cluster and communicate with the API server of that cluster. As such, run the following commands:

export CONTEXT_EKS=$(kubectl config view -o jsonpath='{.contexts[0].name}')
mv $HOME/.kube/config $HOME/.kube/$EKS_CLUSTER_NAME.kubeconfig
export KUBECONFIG=$HOME/.kube/eksd.kubeconfig:$HOME/.kube/$EKS_CLUSTER_NAME.kubeconfig
kubectl config view --merge --flatten > $HOME/.kube/config
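The merge above relies on KUBECONFIG accepting a colon-separated list of files: kubectl loads every file on the list and merges them, with earlier files taking precedence on conflicting keys. A self-contained sketch with two stub kubeconfigs (the file names and context names here are illustrative only):

```shell
mkdir -p "$HOME/.kube"
# Two single-context stub kubeconfigs, reduced to the fields that matter for the merge
printf 'apiVersion: v1\nkind: Config\ncontexts:\n- name: eksd-ctx\n  context: {cluster: eksd, user: eksd}\n' > "$HOME/.kube/eksd.demo"
printf 'apiVersion: v1\nkind: Config\ncontexts:\n- name: eks-ctx\n  context: {cluster: eks, user: eks}\n' > "$HOME/.kube/eks.demo"
# kubectl reads each path in KUBECONFIG, in order, and merges the results
export KUBECONFIG="$HOME/.kube/eksd.demo:$HOME/.kube/eks.demo"
# kubectl config view --merge --flatten would now emit both contexts in one document
echo "$KUBECONFIG"
```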

Let's rename the context from the kubeconfig file to use more friendly names instead:

kubectl config rename-context ${CONTEXT_EKSD} EKS-D
kubectl config rename-context ${CONTEXT_EKS} EKS


Now, display the list of Kubernetes contexts, which includes the two EKS clusters created so far.

kubectl config get-contexts


3. Building and Deploying a REST API with Node.js, Express, and Amazon DocumentDB

Once the setup process has finished, we can move on and get started with provisioning the sample application, which consists of an API written in Node.js. This API will be deployed on the EKS clusters to consume a Movies collection stored in Amazon DocumentDB and expose that dataset to any external client via a REST interface. Here is the API architecture diagram:

Sample application image


a. Amazon DocumentDB configuration

a.1 - Cluster and Instance setup

In this step, the shell script below will perform the following tasks:

  • create a security group for the Movies DB in Amazon DocumentDB and assign an inbound rule for the cluster port
  • create an Amazon DocumentDB cluster with a new instance
  • install the public key to be able to access the Amazon DocumentDB cluster through TLS with mongosh
  • expose the output environment variable DBCLUSTER_CONNECTION_STRING to be utilized to connect to Amazon DocumentDB using mongosh in the step below. Also, its value will be used as the connection string in the ../node-rest-api/app.js file.
  • install mongo shell on the AWS Cloud9 environment to test connectivity to the Amazon DocumentDB cluster through TLS

Please note that you may change parameters such as the cluster name and port (default 27017), master username and password, and security group name before running this script.

Script to set up Amazon DocumentDB
export DBCLUSTER_NAME=moviesdb-cluster
export DBCLUSTER_PORT=27017
export DBCLUSTER_SECURITY_GROUP_NAME=MoviesDBClusterSecurityGroup
# Master credentials used below; replace these placeholder values with your own
export DBCLUSTER_MASTERUSERNAME=masteruser
export DBCLUSTER_MASTERUSERPWD=ChangeMe12345

# Create a security group for the Movies DB in Amazon DocumentDB and assign an inbound rule for the cluster port
export DBCLUSTER_SECURITY_GROUP_ID=$(aws ec2 create-security-group --group-name $DBCLUSTER_SECURITY_GROUP_NAME --description "DocumentDB cluster security group" | jq -r '.GroupId')
aws ec2 authorize-security-group-ingress \
    --group-id $DBCLUSTER_SECURITY_GROUP_ID \
    --protocol tcp \
    --port $DBCLUSTER_PORT \
    --cidr 0.0.0.0/0   # narrow this to your VPC CIDR where possible

# The following command create-db-cluster creates an Amazon DocumentDB cluster
aws docdb create-db-cluster \
    --db-cluster-identifier $DBCLUSTER_NAME \
    --engine docdb \
    --master-username $DBCLUSTER_MASTERUSERNAME \
    --master-user-password $DBCLUSTER_MASTERUSERPWD \
    --vpc-security-group-ids $DBCLUSTER_SECURITY_GROUP_ID

export DBCLUSTER_ENDPOINT=$(aws docdb describe-db-clusters --db-cluster-identifier $DBCLUSTER_NAME | jq -r '.DBClusters[].Endpoint')
export DBCLUSTER_PORT=$(aws docdb describe-db-clusters --db-cluster-identifier $DBCLUSTER_NAME | jq -r '.DBClusters[].Port')

# Creates a new instance in the Amazon DocumentDB cluster
aws docdb create-db-instance \
    --db-cluster-identifier $DBCLUSTER_NAME \
    --db-instance-class db.r5.xlarge \
    --db-instance-identifier $DBCLUSTER_NAME-instance \
    --engine docdb
# To encrypt data in transit and use TLS to access Amazon DocumentDB we need to download the CA public key bundle into the node-rest-api folder
wget -P ../node-rest-api/ https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

# This variable will be used to connect to Amazon DocumentDB using mongosh and will replace the Mongoose connection string in the ../node-rest-api/app.js file
# (connection-string format per the Amazon DocumentDB docs; tlsCAFile must point to the bundle downloaded above)
export DBCLUSTER_CONNECTION_STRING="mongodb://$DBCLUSTER_MASTERUSERNAME:$DBCLUSTER_MASTERUSERPWD@$DBCLUSTER_ENDPOINT:$DBCLUSTER_PORT/?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
# Installing mongo shell on the AWS Cloud9 environment
wget https://downloads.mongodb.com/compass/mongosh-1.1.7-linux-x64.tgz
tar -xvf mongosh-1.1.7-linux-x64.tgz
sudo cp mongosh-1.1.7-linux-x64/bin/mongosh /usr/local/bin/
## Amazon DocumentDB setup
cd $HOME_REPO/documentdb 
chmod +x

Upon a successful Amazon DocumentDB cluster provisioning (around 3-5 minutes), your database setup should look like this:


a.2 - VPC peering configuration

As previously shown in the overall architecture, each EKS cluster was provisioned in its own VPC. Therefore, in order to properly connect each EKS cluster with the Amazon DocumentDB instance, we'll use VPC peering to allow traffic to flow between the Node REST API VPCs and the Amazon DocumentDB VPC. However, you don't need to worry about setting up route tables, CIDR blocks, or the VPC peering itself, as I have automated every step for a smoother experience. You just need to run the following script and you're all set:

## VPC peering configuration
cd $HOME_REPO/documentdb 
chmod +x


a.3 - Connecting to the cluster instance

Mongo Shell is a command-line utility that may be utilized to connect and query your Amazon DocumentDB cluster. Now, we'll connect to the Amazon DocumentDB by using mongosh and the DBCLUSTER_CONNECTION_STRING environment variable output from the shell script executed in the previous step. You may find more details about the connection string format and how to programmatically connect to Amazon DocumentDB here.
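The connection string exported by the setup script follows the standard MongoDB URI format with DocumentDB's TLS options appended. A sketch of its anatomy with placeholder credentials and endpoint (the real values come from the script in step a.1):

```shell
# Placeholder values for illustration only
DBCLUSTER_CONNECTION_STRING="mongodb://masteruser:secret@moviesdb-cluster.cluster-abc123.us-east-1.docdb.amazonaws.com:27017/?tls=true&tlsCAFile=global-bundle.pem&retryWrites=false"
# mongosh "$DBCLUSTER_CONNECTION_STRING"   # opens an interactive shell against the cluster
# Extract the host portion (everything between '@' and the port separator)
host=$(echo "$DBCLUSTER_CONNECTION_STRING" | cut -d'@' -f2 | cut -d':' -f1)
echo "$host"
# → moviesdb-cluster.cluster-abc123.us-east-1.docdb.amazonaws.com
```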



a.4 - Creating the Sample Movies Collection

Now that you are able to connect to the Amazon DocumentDB instance, let's run the dbmovies.js script to make sure we can manipulate data to be consumed by a REST API, as explained in the next section. The script below uses insertMany() to populate the Amazon DocumentDB instance and then uses db.collection.find() to verify the records were added to the collection.

Script to populate the Sample DB Movies Collection
db.movies.insertMany( [
   {
      title: 'Titanic',
      year: 1997,
      genres: [ 'Drama', 'Romance' ]
   },
   {
      title: 'Spirited Away',
      year: 2001,
      genres: [ 'Animation', 'Adventure', 'Family' ]
   },
   {
      title: 'Casablanca',
      year: 1942,
      genres: [ 'Drama', 'Romance', 'War' ]
   },
   {
      title: 'Avatar',
      year: 2009,
      genres: [ 'Action', 'Adventure', 'Fantasy' ]
   },
   {
      title: 'The Avengers',
      year: 2012,
      genres: [ 'Action', 'Sci-Fi', 'Thriller' ]
   }
] )
printjson( db.movies.find( {} ) );

Now, run the command below to populate the Sample DB Movies collection with the initial data to be consumed by the Node REST API that we will build in the next section.

## Create DB Movies collection
mongosh $DBCLUSTER_CONNECTION_STRING --file $HOME_REPO/documentdb/dbmovies.js


You have successfully set up Amazon DocumentDB and can manage collections and documents as needed. You may find more details on how to set up a DocumentDB cluster here and how to work with Mongo shell at Write Scripts for mongosh.

Next, let's get started building the REST API to consume the Movies collection that we just created.

b. Movies REST API configuration

b.1 - Building the Node REST API

The Dockerfile describes how to build the Docker container image and specifies which libraries (e.g., express, mongoose) to install to host the Node.js application. In this example, I utilize Mongoose, an Object Data Modeling (ODM) library for MongoDB, to access the movies collection created and populated in the previous step. Since Amazon DocumentDB is compatible with MongoDB, you can transparently use this framework to connect to it as well.
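The Dockerfile itself is not reproduced in this walkthrough; the sketch below writes out what a minimal version likely looks like (the Node base image tag and entry file are assumptions; the exposed port matches the deployment manifest):

```shell
cat > Dockerfile.sketch <<'EOF'
# Minimal sketch of the Node REST API image (base image tag and entry file are assumptions)
FROM node:16-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install            # installs express, mongoose, and the other dependencies
COPY . .
EXPOSE 3000                # matches targetPort: 3000 in the Service
CMD ["node", "app.js"]
EOF
grep 'EXPOSE' Dockerfile.sketch
```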

Mongoose Application Data Model
const mongoose = require('mongoose');

const moviesSchema = mongoose.Schema({
  title: {
    type: String,
    required: true
  },
  year: {
    type: Number,
    required: true,
    validate: {
      validator: Number.isInteger,
      message: '{VALUE} is not an integer value'
    }
  },
  genres: {
    type: [String],
    required: true
  }
});

module.exports = mongoose.model('movies', moviesSchema);

This REST API allows you to perform the following operations against the Movies collection:

  • get all movies
  • add movies
  • get a specific movie
  • delete movies
  • update movies
Node REST API routes
const express = require('express');
const router = express.Router();
const Movie = require('../models/movies-model');

// get all movies
router.get('/', async (req, res) => {
  try {
    console.log('** Get All Movies API invocation **');
    const movie = await Movie.find();
    res.json(movie);
  } catch (err) {
    res.json({ message: err });
  }
});

// add movie
router.post('/', async (req, res) => {
  const movie = new Movie({
    title: req.body.title,
    year: req.body.year,
    genres: req.body.genres
  });

  try {
    console.log('** Add movie API invocation **');
    const savedMovie = await movie.save();
    res.json(savedMovie);
  } catch (e) {
    res.status(503).json({ message: e });
  }
});

// get a specific movie
router.get('/:uuid', async (req, res) => {
  try {
    console.log('** Find movie by ID API invocation **');
    const movie = await Movie.findById({ _id: req.params.uuid });
    res.json(movie);
  } catch (e) {
    res.status(503).json({ message: e });
  }
});

// delete movie
router.delete('/:uuid', async (req, res) => {
  try {
    console.log('** Delete movie API invocation **');
    const removedMovie = await Movie.remove({ _id: req.params.uuid });
    res.json(removedMovie);
  } catch (e) {
    res.status(503).json({ message: e });
  }
});

// update movie
router.put('/:uuid', async (req, res) => {
  try {
    console.log('** Update movie API invocation **');
    const updatedMovie = await Movie.findByIdAndUpdate(req.params.uuid, {
      title: req.body.title,
      year: req.body.year,
      genres: req.body.genres
    });
    res.json(updatedMovie);
  } catch (e) {
    res.status(503).json({ message: e });
  }
});

module.exports = router;

The build-and-push script uses a Dockerfile to build a container image made up of a Node.js application, working as a REST API to expose the Movies collection, and pushes it into the Amazon Elastic Container Registry (ECR). The argument we pass into it, node-rest-api, is used as the ECR repository name. After that, this image will be referenced by node-rest-api-deployment.yaml to deploy the k8s ReplicationController that initiates the pods where the Node.js REST API will be running.

Run the following script in the AWS Cloud9 terminal to build the Docker container image:

cd $HOME_REPO/node-rest-api 
chmod +x


Explore the script
#!/usr/bin/env bash

# This script shows how to build the Docker image and push it to ECR to be used 
# within the EKS clusters as a REST API to consume the Amazon DocumentDB Sample Movies collection.

# The first argument to this script is the image name. This will be used as the image on the local
# machine and combined with the account and region to form the repository name for ECR.
# The second argument is the connection string used to connect to Amazon DocumentDB in the app.js file

# The image name comes in as the first argument and the connection string as the second
image=$1
CONNECTION_STRING=$2

if [ "$image" == "" ]
then
    echo "Usage: $0 <image-name> <connection-string>"
    exit 1
fi

if [ "$CONNECTION_STRING" == "" ]
then
    echo "No connection string has been provided to be used by Mongoose to connect to Amazon DocumentDB."
    exit 1
fi

# Get the account number associated with the current IAM credentials
account=$(aws sts get-caller-identity --query Account --output text)

if [ $? -ne 0 ]
then
    exit 255
fi

# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}

# Full image name in ECR
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest"

# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1

if [ $? -ne 0 ]
then
    aws ecr create-repository --repository-name "${image}" > /dev/null
fi

# Get the login command from ECR and execute it directly
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com

# Replace the Mongoose connection string in the ../node-rest-api/app.js file

# Build the docker image locally with the image name and then push it to ECR
# with the full name.

docker build  -t ${image} .

# After the build completes, it tags the image so that you can push the image to the repository
docker tag ${image} ${fullname}

docker push ${fullname}

# Points the K8S ReplicationController to the image pushed into ECR
# change our delimiter from / to | to avoid escaping issues with the image name which contains /
sed -i "s|YOUR-CONTAINER-IMAGE|$fullname|g" ./k8s/node-rest-api-deployment.yaml
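The sed step above swaps the image placeholder inside the manifest; using | as the delimiter avoids escaping the / characters an ECR image name contains. A standalone sketch with a hypothetical account ID:

```shell
fullname="123456789012.dkr.ecr.us-east-1.amazonaws.com/node-rest-api:latest"   # hypothetical
printf 'image: YOUR-CONTAINER-IMAGE\n' > /tmp/deployment-snippet.yaml
# '|' as the sed delimiter: the '/' in ${fullname} needs no escaping
sed -i "s|YOUR-CONTAINER-IMAGE|${fullname}|g" /tmp/deployment-snippet.yaml
cat /tmp/deployment-snippet.yaml
# → image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/node-rest-api:latest
```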

b.2 - Deploying the Node REST API

Here is the Kubernetes service configuration to deploy the Node.js REST API on top of the EKS environments we just created. It exposes the Movies collection created in Amazon DocumentDB. The file is located at $HOME_REPO/node-rest-api/k8s/node-rest-api-deployment.yaml

Expand to explore the REST API deployment configuration on EKS
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: node-rest-api
  name: node-rest-api-controller
spec:
  replicas: 2
  selector:
    name: node-rest-api
  template:
    metadata:
      labels:
        name: node-rest-api
    spec:
      containers:
      - name: node-rest-api
        image: YOUR-CONTAINER-IMAGE
        ports:
        - containerPort: 3000
          name: http-server
---
apiVersion: v1
kind: Service
metadata:
  name: node-rest-api
  labels:
    name: node-rest-api
spec:
  type: ClusterIP
  ports:
    - name: node-rest-api
      port: 3000
      targetPort: 3000
      protocol: TCP
  selector:
    name: node-rest-api

## EKS Distro
kubectl apply --context $CONTEXT_EKSD -f $HOME_REPO/node-rest-api/k8s/node-rest-api-deployment.yaml
## EKS
kubectl apply --context $CONTEXT_EKS -f $HOME_REPO/node-rest-api/k8s/node-rest-api-deployment.yaml


After that, you can check on the successful creation of both the Node REST API Service and Replication Controller.

## EKS Distro
kubectl describe svc node-rest-api --context $CONTEXT_EKSD
kubectl describe rc node-rest-api-controller --context $CONTEXT_EKSD 
## EKS
kubectl describe svc node-rest-api --context $CONTEXT_EKS 
kubectl describe rc node-rest-api-controller --context $CONTEXT_EKS   


b.3 - Exposing the Node REST API service for external consumption

After creating the Kubernetes node-rest-api service, we will forward a local port on the local machine to the port on which the Movies REST API pods listen (port 3000).

## EKS
kubectl port-forward service/node-rest-api 3001:3000 --context $CONTEXT_EKS &                              

## EKS Distro
kubectl port-forward service/node-rest-api 3003:3000 --context $CONTEXT_EKSD &
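Since both forwards run in the background (&), you can list and later stop them with the shell's job controls; a sketch using a harmless placeholder command in place of kubectl:

```shell
sleep 30 &         # stands in for: kubectl port-forward service/node-rest-api 3001:3000 ... &
PF_PID=$!          # capture the background job's PID
jobs -l            # list background jobs, including the forward
kill "$PF_PID"     # stop the forward when done
wait "$PF_PID" 2>/dev/null || true
echo "forward stopped"
```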

b.4 - Consuming the Node REST API

The Movies REST API exposes the /movie endpoint through the HTTP methods exercised in the following script.

Script to test HTTP methods exposed by the Movies REST API
export REST_API_PORT=$1

### 1. Get all movies (GET) and format with jq:
echo "*** 1. Get all movies (GET) ***"
curl --silent --location --request GET "localhost:$REST_API_PORT/movie/" \
--header 'Content-Type: application/json'  |  jq '.[]'

### 2. Get a single movie record using its ID (GET). It uses the first record from the array
echo "*** 2. Get a single movie record using its ID (GET). It uses the first record from the array ***"
export MOVIE_ID=$(curl --silent --location --request GET "localhost:$REST_API_PORT/movie/" \
--header 'Content-Type: application/json' |  jq -r '.[0]._id')

curl --silent --location --request GET "localhost:$REST_API_PORT/movie/$MOVIE_ID" \
--header 'Content-Type: application/json' |  jq '.'

### 3. Create a new movie record (POST) and gets its record _id to update it in the next step:
echo "*** 3. Create a new movie record (POST) ***"
export NEW_MOVIE_ID=$(curl --silent --location --request POST "localhost:$REST_API_PORT/movie/" \
--header 'Content-Type: application/json' \
--data-raw '{
   "title": "Toy Story 3",
   "year": 2009,
   "genres": [ "Animation", "Adventure", "Family" ]
}' | jq -r '._id')
echo -e "New movie ID created: $NEW_MOVIE_ID \n"

### 4. Update the movie(change year) record created above (PUT):
echo "*** 4. Update the movie(change year) record created above (PUT) ***"
curl --silent --location --request PUT "localhost:$REST_API_PORT/movie/$NEW_MOVIE_ID" \
--header 'Content-Type: application/json' \
--data-raw '{
   "title": "Toy Story 3",
   "year": 2010,
   "genres": [ "Animation", "Adventure", "Family" ]
}' |  jq '.'

# Returns all movies records and then uses jq to filter and check on the year change
echo "*** Returns all movies records and then uses jq to filter and check on the year change ***"
curl --silent --location --request GET "localhost:$REST_API_PORT/movie/" \
--header 'Content-Type: application/json' |  jq --arg MOVIEID "$NEW_MOVIE_ID" '.[] | select( ._id == $MOVIEID ).year'

### 5. Delete a movie using its ID (DELETE). In this example, the first record will be deleted:
echo "*** 5. Delete a movie using its ID (DELETE). In this example, the first record will be deleted ***"
curl --silent --location --request DELETE "localhost:$REST_API_PORT/movie/$MOVIE_ID" \
--header 'Content-Type: application/json' |  jq '.'

As such, run this script in the AWS Cloud9 terminal to consume the Movies REST API on each EKS cluster. You can also perform additional calls to the API at your convenience; the script below is only a reference:

chmod +x
## EKS
./ 3001

## EKS Distro
./ 3003



Congratulations!! If you made it to this point, you've finished building, deploying, exposing, and consuming the Movies REST API on the EKS clusters. Next, let's dive deep into the EKS configuration to connect all clusters in a single dashboard.

4. Monitoring EKS Distro by using EKS Connector (Optional)

After completing the provisioning of the Node REST API on the EKS clusters, you should be able to monitor all of them from a single pane of glass, as the EKS Connector was installed on EKS Distro during deployment through its AWS CDK application, which ensures it shows up on the EKS Console alongside the regular EKS cluster running on AWS. As such, there is no need to run any command shown below, as everything has been automated for you; this section is for educational purposes only.

By default, EKS clusters on the AWS Cloud automatically show up in the AWS Console upon creation. Once an external cluster is connected, you can see its status, configuration, and workloads in the Amazon EKS console, regardless of where the cluster is running.

Amazon EKS provides an integrated dashboard in the AWS console for connecting, visualizing, and troubleshooting Kubernetes clusters and applications. You can leverage the EKS console to view all of your Kubernetes clusters, including EKS Anywhere and EKS Distro clusters running outside of the AWS cloud, among other versions, thanks to the integration with EKS Connector.

In a nutshell, the EKS Connector is an agent that runs on a Kubernetes cluster and enables it to register with Amazon EKS by creating a secure data channel using AWS Systems Manager Session Manager, federating external Kubernetes clusters into the EKS Console, including clusters from other providers and distributions such as Anthos, GKE, AKS, OpenShift, Tanzu, and Rancher, to name a few examples.

The cluster can be registered in multiple ways: using the AWS CLI, SDK, eksctl, or the AWS console. In this example, I've used eksctl within the initialization scripts under src/ during the EKS-D cluster build to automate the cluster registration in the AWS Console.

The cluster registration process requires two steps:

  1. Registering the cluster with Amazon EKS
  2. Applying a connector YAML manifest file in the target cluster to enable agent connectivity
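The two steps map to two commands. A sketch for reference only, since the CDK app already automates this (the cluster name is a placeholder, the --provider value is an assumption, and the manifest file names follow what eksctl typically emits on registration):

```shell
CLUSTER_NAME="eks-distro-dev"   # placeholder cluster name
# 1. Register the cluster with Amazon EKS (writes the connector manifests locally):
#    eksctl register cluster --name "$CLUSTER_NAME" --provider OTHER --region us-east-1
# 2. Apply the generated manifests on the target cluster to start the agent:
#    kubectl apply -f eks-connector.yaml,eks-connector-clusterrole.yaml,eks-connector-console-dashboard-full-access-group.yaml
echo "registration sketched for $CLUSTER_NAME"
```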

Below are the permissions required before registering a cluster through an IAM user or role:

  • eks:RegisterCluster
  • ssm:CreateActivation
  • ssm:DeleteActivation
  • iam:PassRole

Registered EKS Connector for EKS Distro


Whenever you connect the EKS Distro cluster in the AWS Management Console through the EKS connector, the eksctl creates the service-linked role AWSServiceRoleForAmazonEKSConnector for you.

Amazon EKS uses the service-linked role named AWSServiceRoleForAmazonEKSConnector, which contains attached policies allowing the role to manage the necessary resources, to connect to the registered Kubernetes cluster. In addition, as in the example above, it created the Amazon EKS Connector agent IAM role (named eksctl-20220426183213240997 in the command output above), which is used by the EKS Connector agent on the Kubernetes cluster to connect to AWS Systems Manager as required.

Also, the integration was completed by applying the Amazon EKS Connector manifest file to the EKS Distro cluster. This manifest contains the configuration for the EKS Connector and a proxy agent, which are deployed as a StatefulSet on the target cluster, EKS Distro.

Checking EKS Connector status for the EKS clusters

After applying the Amazon EKS Connector manifest and role binding YAML files to the EKS target clusters, I confirmed that the cluster was properly set up by looking for two pods (eks-connector-0 and eks-connector-1) with status Running, meaning the EKS Connector installation completed successfully.

The EKS Connector acts as a proxy and forwards EKS console requests to the Kubernetes API server on the connected cluster. Upon successful registration, you should see the following list on the EKS Console, indicating that the EKS Connector installation for the EKS Distro cluster has completed successfully. You can now find clusters and their resources within a unified dashboard, with visibility across all your Kubernetes environments.


Now, look into EKS Distro to list all nodes from the cluster created earlier (1 master and 6 worker nodes), as shown in the EKS Console below:

EKS Distro - Nodes


EKS Distro - Node REST API Deployment


5. Walkthrough Demo

Walkthrough Demo


To remove the Amazon EKS clusters and the other resources created throughout this example, run the following commands in a terminal on the AWS Cloud9 environment.

Remember to replace X.X.X.X in the script below with the value of the CdkEksdistroStack.EC2PublicIPaddress key, which contains the public IP address of the EC2 instance used to create the EKS Distro cluster.

Type Y to confirm stacks destruction

## Clean up Amazon DocumentDB cluster
cd $HOME_REPO/documentdb
chmod +x
chmod +x
## Clean up EKS Distro
cd $HOME_REPO/cdk/cdk-eksdistro
ssh -i cdk-eksd-key-pair.pem ec2-user@X.X.X.X ./
eksctl deregister cluster --name $EKSCONNECTOR_CLUSTER_NAME
cdk destroy --all

## Clean up EKS
eksctl delete cluster --region=$AWS_REGION --name $EKS_CLUSTER_NAME
## Clean up ECR image and repository
aws ecr batch-delete-image \
     --repository-name "${image}" \
     --image-ids imageTag=latest
aws ecr delete-repository --repository-name "${image}"   


In this project, you learned how to build development and prototype environments for hybrid software delivery, using the AWS CDK to automate EKS Distro provisioning for hybrid application development and enabling a smooth experience while standing up development and testing Kubernetes environments. The CDK app also automated the EKS Connector deployment to create a unified view through the EKS console, so the EKS Distro development cluster can be monitored from a single pane of glass. On top of that, you created a Node REST API service that exposes a movies collection from a database in Amazon DocumentDB. This API was provisioned on the EKS development environments to showcase deployment consistency regardless of the deployment option utilized (EKS-D or EKS), enabling application development for hybrid EKS-based environments.

To learn more, see the EKS Distro and EKS Connector documentation.

