This directory includes the deployment of the EKS cluster along with the network infrastructure, which is a hard prerequisite for `aws-load-balancer-controller`. I have used a public EKS cluster endpoint to simplify the process.
This directory includes the deployment of:

- `aws-load-balancer-controller` as a Helm release:
  - Policies required for managing AWS load balancers.
  - Service account creation.
- `external-dns` as a Helm release:
  - Policies required for managing the Route 53 zone.
  - Service account creation.
- Route 53 public hosted zone to be managed by `external-dns` (optional, if you need to manage an existing Route 53 DNS zone).
- Data sources for reading the EKS and network configurations from the `eks` directory deployment.
- Example nginx web service to finalise the example, which includes:
  - Kubernetes Deployment
  - Kubernetes Ingress with supported `aws-load-balancer-controller` annotations (refer here for the complete list of annotations).
  - Kubernetes Service
- `policies` directory, which includes the JSON-format policy for `aws_loadbalancer_controller`.
- Security groups for allowing inbound and outbound traffic between the required entities.
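As a rough illustration of the ingress item above, a Kubernetes Ingress carrying `aws-load-balancer-controller` annotations can be declared in Terraform roughly like this. The resource name, namespace, backing `nginx` Service, and hostname are placeholders, not values taken from this repository:

```hcl
# Hypothetical ingress for the example nginx web service; names and the
# hostname below are placeholders, not taken from this repository.
resource "kubernetes_ingress_v1" "nginx" {
  metadata {
    name      = "nginx"
    namespace = "default"
    annotations = {
      # Provision an internet-facing ALB and register pod IPs directly.
      "alb.ingress.kubernetes.io/scheme"       = "internet-facing"
      "alb.ingress.kubernetes.io/target-type"  = "ip"
      "alb.ingress.kubernetes.io/listen-ports" = jsonencode([{ HTTP = 80 }])
    }
  }
  spec {
    ingress_class_name = "alb"
    rule {
      host = "nginx.example.com" # replace with a record in your hosted zone
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "nginx"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}
```

With `external-dns` watching the cluster, the host in the ingress rule is the record it creates in the hosted zone, pointing at the provisioned ALB.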
Why two directories, `eks` and `kubernetes-alb-config`?
This is the most reliable way to use the Kubernetes provider together with the AWS provider to create an EKS cluster. By keeping the two providers' resources in separate Terraform states (or separate workspaces using Terraform Cloud), we can limit the scope of changes to either the EKS cluster or the Kubernetes resources. This prevents dependency issues between the AWS and Kubernetes providers, since Terraform's provider configurations must be known before a configuration can be applied.
For more details refer to terraform-provider-kubernetes with EKS documentation.
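One way to wire the two states together, sketched here with an assumed local backend path and an assumed `cluster_name` output (neither is confirmed by this repository), is to read the `eks` directory's state and configure the Kubernetes provider from it:

```hcl
# Sketch only: the backend, state path, and output name are assumptions
# about how the eks directory exposes its cluster.
data "terraform_remote_state" "eks" {
  backend = "local"
  config = {
    path = "../eks/terraform.tfstate"
  }
}

data "aws_eks_cluster" "this" {
  name = data.terraform_remote_state.eks.outputs.cluster_name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

  # Fetch a short-lived token via the AWS CLI instead of storing credentials.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.this.name]
  }
}
```

Because the provider is configured from data sources rather than from resources in the same state, the Kubernetes configuration can always be planned once the cluster exists.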
To create and update the resources defined in this Terraform configuration, run the following commands in the respective directories, in the following sequence.

- You must deploy the `eks` directory configuration first.

Pre-requisites: Terraform must be installed. Although the code does not stop you from using any version, `>= 1.x` is recommended.
```shell
# clone the repository
git clone https://github.com/ishuar/terraform-eks.git

# change to the eks directory to use this example; the EKS cluster can also
# be deployed by other means, or a pre-existing cluster can be used
cd examples/cluster_with_alb/eks
terraform init
terraform plan
terraform apply

# once the EKS cluster is available, change the directory to kubernetes-alb-config
cd ../kubernetes-alb-config
terraform init
terraform plan
terraform apply
```
To destroy the resources created by this Terraform configuration, run the following commands in the respective directories, in the following sequence.

```shell
cd examples/cluster_with_alb/kubernetes-alb-config # skip if this is already your current directory
terraform destroy -auto-approve # drop "-auto-approve" if you don't want to auto-approve
cd ../eks
terraform destroy -auto-approve # drop "-auto-approve" if you don't want to auto-approve
```
- You have to own a domain for the nginx web service deployment and verification; in this example I own the domain `worldofcontainers.tk`.
- If you have a domain and it is not hosted by AWS, make sure to forward your domain's resolution to the AWS name servers. These are available as the Terraform output `zone_name_servers` from this configuration, or in the AWS Route 53 DNS zone console view.
- Worth reading: Setting up ExternalDNS for Services on AWS and the aws-load-balancer-controller documentation.
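For reference, the hosted zone and the `zone_name_servers` output mentioned above can be expressed as a minimal sketch like the following; the resource name is a placeholder and may differ from what this configuration actually uses:

```hcl
# Minimal sketch: create the public hosted zone and expose its name servers
# so the domain registrar can be pointed at AWS.
resource "aws_route53_zone" "public" {
  name = "worldofcontainers.tk"
}

output "zone_name_servers" {
  description = "Forward your domain's NS records to these name servers."
  value       = aws_route53_zone.public.name_servers
}
```

After `terraform apply`, the four name servers printed by this output are what you enter at your registrar if the domain is hosted outside AWS.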