Setting up the AWS EKS provisioning workflow on GitHub

In this section we will create a workflow that provisions an AWS EKS cluster. This workflow is configured to be triggered manually by the user. As part of the EKS cluster provisioning, an NGINX Ingress controller is deployed and a .env file named eks-variables is created in the .github folder. It contains, among other values, the DNS name of the Ingress controller, which you will need to add as a CNAME record for the domains used in your application Ingress manifest files. Refer to the appendix to retrieve the DNS name of the Ingress controller independently.
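
For illustration only (the hostnames below are hypothetical), the CNAME record maps a domain from your Ingress manifests to the load balancer hostname of the Ingress controller:

app.example.com.  CNAME  a1b2c3d4e5f6-1234567890.eu-west-1.elb.amazonaws.com.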

The creation of the workflow follows the project workflow: a new branch named feature/eks-provisioning will be created, and the YAML file for the workflow together with the Terraform files for creating the cluster will be pushed to it.

Then, a Pull Request (PR) will be created in order to merge the new branch into the appropriate branch (provided with the -b flag). The PR will be merged automatically if the repository policies are met. If the merge is not possible, the PR URL will be shown as output, or, if the -w flag was used, the PR will be opened in your web browser.

The script located at /scripts/pipelines/github/pipeline_generator.sh will automatically create this new branch, create the EKS provisioning workflow based on the YAML template, create the Pull Request and, if possible, merge this new branch into the specified branch.
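
Conceptually, its effect is roughly equivalent to the following sequence (a simplified sketch for orientation only, not the script's actual implementation; the gh CLI usage and commit message are assumptions):

git checkout -b feature/eks-provisioning
git add .github/workflows/<pipeline name>.yml <terraform files>
git commit -m "Add EKS provisioning workflow"
git push -u origin feature/eks-provisioning
gh pr create --base <target branch> --head feature/eks-provisioning --fill
gh pr merge feature/eks-provisioning --merge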

Prerequisites

  • An S3 bucket to store the Terraform state of the cluster. It can be created with the following command:

aws s3 mb s3://<bucket name>
# Example: aws s3 mb s3://terraformStateBucket

  • An AWS IAM user with the required permissions to provision the EKS cluster (as a quick check of your credentials, see the command after this list).

  • This script will commit and push the corresponding YAML template into your repository, so please make sure your local repository is up to date (i.e. you have pulled the latest changes with git pull).
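
As a quick sanity check of your AWS credentials (a standard AWS CLI command, not specific to this project), you can confirm which IAM identity they resolve to:

aws sts get-caller-identity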

Creating the workflow using the provided script

Before executing the workflow generator, you will need to customize some input variables describing the environment. Also, you may want to use an existing VPC and subnets instead of creating new ones. To do so, you can either edit the terraform.tfvars file or take advantage of the set-terraform-variables.sh script located at /scripts/environment-provisioning/aws/eks, which allows you to create or update values for the required variables, passing them as flags.

Example: creating a new VPC on cluster creation:

./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --vpc_name <vpc name> --vpc_cidr_block <vpc cidr block>

Example: reusing existing VPC and subnets:

./set-terraform-variables.sh --region <region name> --instance_type <workers instance type> --existing_vpc_id <vpc id> --existing_vpc_private_subnets <array of subnet ids>
  • Rancher is installed by default on the cluster after provisioning. If you wish to change this, please update eks-pipeline.cfg accordingly.
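
For reference, after running the first example above, the resulting terraform.tfvars could look roughly like this (the values are illustrative and the variable names are assumptions based on the flags of set-terraform-variables.sh):

region         = "eu-west-1"
instance_type  = "t3.medium"
vpc_name       = "hangar-eks-vpc"
vpc_cidr_block = "10.0.0.0/16"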

Usage

pipeline_generator.sh \
  -c <config file path> \
  -n <pipeline name> \
  -d <project local path> \
  --cluster-name <cluster name> \
  --s3-bucket <s3 bucket name> \
  --s3-key-path <s3 key path> \
  [-b <branch>] \
  [-w]
Note
The config file for the EKS provisioning workflow is located at /scripts/pipelines/github/templates/eks/eks-pipeline.cfg.

Flags

-c, --config-file        [Required] Configuration file containing workflow definition.
-n, --pipeline-name      [Required] Name that will be set to the workflow.
-d, --local-directory    [Required] Local directory of your project (the path should always be using '/' and not '\').
    --cluster-name       [Required] Name for the cluster.
    --s3-bucket          [Required] Name of the S3 bucket where the Terraform state of the cluster will be stored.
    --s3-key-path        [Required] Path within the S3 bucket where the Terraform state of the cluster will be stored.
-b, --target-branch                 Name of the branch that the Pull Request will target. The PR is not created if this flag is not provided.
-w                                  Open the Pull Request on the web browser if it cannot be automatically merged. Requires -b flag.

Example

./pipeline_generator.sh -c ./templates/eks/eks-pipeline.cfg -n eks-provisioning -d C:/Users/$USERNAME/Desktop/quarkus-project --cluster-name hangar-eks-cluster --s3-bucket terraformStateBucket --s3-key-path eks/state -b develop -w

Appendix: Interacting with the cluster

First, generate a kubeconfig file for accessing the AWS EKS cluster:

aws eks update-kubeconfig --name <cluster name> --region <aws region>

Now you can use the kubectl tool to communicate with the cluster.
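
For example, a quick check that the kubeconfig works (a standard kubectl command, shown purely as an illustration):

kubectl get nodes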

To enable an IAM user to connect to the EKS cluster, please refer here.

To get the DNS name of the NGINX Ingress controller on the EKS cluster, run the following command:

kubectl get svc --namespace nginx-ingress nginx-ingress-nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Rancher will be available on https://<ingress controller domain>/dashboard.

Appendix: Rancher resources