Commit 9e9ec68
New files.
These files enable the creation of the EKS cluster. There is room for improvement, which will be reflected in upcoming commits/pull requests.
abdullahgarcia committed Apr 22, 2021
1 parent cf18068 commit 9e9ec68
Showing 8 changed files with 266 additions and 1 deletion.
11 changes: 10 additions & 1 deletion aws/eks/eks-terraform-scripts/README.md
@@ -2,7 +2,7 @@

## Requirements

Before running the Terraform scripts, make sure to set the [IAM required permissions](./iam-required-permissions.md) first. You will have to remove the comments in the code if you decide to copy/paste them.
Before running the Terraform scripts, make sure to set the [IAM required permissions](./iam-required-permissions.md) first. You will have to remove the comments in the code if you decide to copy/paste them. Remember that it is a best practice to attach these permissions to a role as a managed policy rather than inline.
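
A minimal sketch of that recommendation, assuming the permissions JSON from [iam-required-permissions.md](./iam-required-permissions.md) has been saved (with the comments removed) as a local file; the role name, policy name, file path, and account ID below are illustrative only:

```hcl
# Illustrative only: expose the EKS provisioning permissions as a customer-managed
# policy attached to a dedicated role, rather than as an inline policy.
resource "aws_iam_policy" "eks_provisioner" {
  name   = "eks-provisioner"                           # illustrative name
  policy = file("${path.module}/eks-permissions.json") # JSON from iam-required-permissions.md, comments removed
}

resource "aws_iam_role" "eks_provisioner" {
  name = "eks-provisioner"

  # Trust policy: adjust the principal to whoever should assume the role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { AWS = "arn:aws:iam::123456789012:root" } # placeholder account ID
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_provisioner" {
  role       = aws_iam_role.eks_provisioner.name
  policy_arn = aws_iam_policy.eks_provisioner.arn
}
```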

## Instructions

@@ -28,3 +28,12 @@ yes
```shell
terraform apply
```
7. At this point you can configure **kubectl**:
```shell
aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)

```

cphang99 commented on Aug 25, 2021:

Is there scope for testing this whole process with CI?

abdullahgarcia (Author, Contributor) replied on Aug 29, 2021:

Yes: https://github.com/finos/cloud-service-certification/blob/main/README.md#approach-and-proposed-solution

Basically, we'll be testing to make sure that:

  • The defined Terraform file meets standards, including security, and reflects the controls and architecture best practices.
  • The Terraform file outputs meet standards, including security, and reflect the controls and architecture best practices.
8. What you decide to do next is up to you; the cluster is ready for you to work with it.
9. Clean up:
```shell
terraform destroy
```
50 changes: 50 additions & 0 deletions aws/eks/eks-terraform-scripts/eks-cluster.tf
@@ -0,0 +1,50 @@
resource "aws_kms_key" "eks" {
description = "EKS secret encryption key."
}

module "eks" {
source = "terraform-aws-modules/eks/aws"
cluster_name = local.cluster_name
cluster_version = "1.18"

cphang99 commented on Aug 25, 2021:

The last time I used the eks module, the cluster_version supported was pretty dependent on the version of the eks terraform module that was used. Might be worth documenting what versions are supported.

abdullahgarcia (Author, Contributor) replied on Aug 29, 2021:

We can discuss how to handle this.

subnets = module.vpc.private_subnets

cluster_encryption_config = [
{
provider_key_arn = aws_kms_key.eks.arn
resources = ["secrets"]
}
]

tags = {
"environment" = "cloud-service-certification"

cphang99 commented on Aug 25, 2021:

Does this want to be configurable?

abdullahgarcia (Author, Contributor) replied on Aug 29, 2021:

Yes, we want to ensure that any created infrastructure, services, etc. are identifiable. However, feedback is welcome.

}

vpc_id = module.vpc.vpc_id

workers_group_defaults = {
root_volume_type = "gp2"
}

worker_groups = [

cphang99 commented on Aug 25, 2021:

It might be worth making this a little more configurable. Some potential areas where more flexibility could be introduced?

cphang99 commented on Aug 25, 2021:

Separately, what expectations are being put on users to manage ingress to nodes and/or node groups on the cluster? LoadBalancer Kubernetes services will auto-provision ALBs/ELBs/NLBs as necessary, with the downside that additional management of DNS records is often required, because those AWS resources are destroyed as soon as the Kubernetes service is removed. This also seems consistent with the current VPC configuration, with the VPC subnets being tagged as documented here.

On the other hand, another scenario is to set up load balancers as part of the Terraform configuration, with target groups mapped to node groups and Route 53 records mapped to the load balancers. This then allows users to consider using NodePort services and to have granular network configuration for each node group. The downside is that there is potentially a separation-of-concerns issue: the purpose of this Terraform configuration is to create EKS clusters, and you may want to minimise having too much additional AWS infrastructure to set up.

abdullahgarcia (Author, Contributor) replied on Aug 29, 2021:

Making cluster workers more configurable only speaks to the computing capacity and the workloads to be managed in a cluster. I'm happy to discuss this.

I think a discussion about Ingress and the Kubernetes service type LoadBalancer in the cluster is worth having.

{
name = "worker-group-1"
instance_type = "t2.small"
asg_desired_capacity = 2
additional_security_group_ids = [aws_security_group.worker_group_one.id]
},
{
name = "worker-group-2"
instance_type = "t2.medium"
asg_desired_capacity = 1
additional_security_group_ids = [aws_security_group.worker_group_two.id]
},
]
}

data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}
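
Regarding the review comments above about the module/`cluster_version` coupling and the hard-coded tags and worker groups, here is a sketch (not part of this commit) of how the `module "eks"` block could be rewritten. The `version` constraint is illustrative and must be checked against the module versions that actually support Kubernetes 1.18, and the variable names are invented here:

```hcl
# Sketch only: pin the EKS module version explicitly and replace hard-coded
# values with variables. Names and version constraints are illustrative.
variable "environment_tag" {
  description = "Value of the environment tag applied to created resources."
  type        = string
  default     = "cloud-service-certification"
}

variable "worker_groups" {
  description = "Worker group definitions."
  type = list(object({
    name                 = string
    instance_type        = string
    asg_desired_capacity = number
  }))
  default = [
    { name = "worker-group-1", instance_type = "t2.small", asg_desired_capacity = 2 },
    { name = "worker-group-2", instance_type = "t2.medium", asg_desired_capacity = 1 },
  ]
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 13.0" # illustrative pin; confirm which module versions support cluster_version 1.18

  cluster_name    = local.cluster_name
  cluster_version = "1.18"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  cluster_encryption_config = [
    {
      provider_key_arn = aws_kms_key.eks.arn
      resources        = ["secrets"]
    }
  ]

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  # The per-group additional_security_group_ids are omitted from this sketch.
  worker_groups = var.worker_groups

  tags = {
    "environment" = var.environment_tag
  }
}
```

Passing the worker groups as a typed variable keeps the compute sizing out of the module block without changing what the module receives.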
18 changes: 18 additions & 0 deletions aws/eks/eks-terraform-scripts/kubernetes-admin.rbac.yaml
@@ -0,0 +1,18 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
19 changes: 19 additions & 0 deletions aws/eks/eks-terraform-scripts/kubernetes.tf
@@ -0,0 +1,19 @@
# The Kubernetes provider is included in this file so the EKS module can complete successfully.
# Otherwise, it throws an error when creating `kubernetes_config_map.aws_auth`.
# You should **not** schedule deployments and services in this workspace.
# This keeps workspaces modular (one for provisioning EKS and another for scheduling Kubernetes resources) as per best practices.

provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
command = "aws"
args = [
"eks",
"get-token",
"--cluster-name",
data.aws_eks_cluster.cluster.name
]
}
}
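
Since `eks-cluster.tf` already declares an `aws_eks_cluster_auth` data source, an alternative to the exec plugin (sketched below, not part of this commit) is to pass its short-lived token to the provider directly:

```hcl
# Alternative sketch: authenticate with the token exposed by the
# aws_eks_cluster_auth data source instead of shelling out to the AWS CLI.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```

The trade-off is that the token is fetched when the data source is read and expires after a short time, whereas the exec plugin refreshes credentials on demand.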
34 changes: 34 additions & 0 deletions aws/eks/eks-terraform-scripts/outputs.tf
@@ -0,0 +1,34 @@
output "cluster_id" {
description = "EKS cluster ID."
value = module.eks.cluster_id
}

output "cluster_endpoint" {
description = "Endpoint for EKS control plane."
value = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
description = "Security group ID attached to the cluster control plane."
value = module.eks.cluster_security_group_id
}

output "kubectl_config" {
description = "kubectl config as generated by the module."
value = module.eks.kubeconfig
}

output "config_map_aws_auth" {
description = "A Kubernetes configuration to authenticate to this EKS cluster."
value = module.eks.config_map_aws_auth
}

output "region" {
description = "AWS region."
value = var.region
}

output "cluster_name" {
description = "Kubernetes cluster name."
value = local.cluster_name
}
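
These outputs are what a separate workspace for scheduling Kubernetes resources would consume, in line with the note in `kubernetes.tf`. Below is a sketch of doing so with `terraform_remote_state`, assuming this configuration's state were stored in an S3 backend (no backend is defined in this commit; the bucket and key are placeholders):

```hcl
# Sketch only: read this configuration's outputs from another workspace.
# Assumes an S3 backend; the bucket and key below are placeholders.
data "terraform_remote_state" "eks" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"    # placeholder
    key    = "eks/terraform.tfstate" # placeholder
    region = "eu-west-2"
  }
}

locals {
  eks_cluster_name     = data.terraform_remote_state.eks.outputs.cluster_name
  eks_cluster_endpoint = data.terraform_remote_state.eks.outputs.cluster_endpoint
}
```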
46 changes: 46 additions & 0 deletions aws/eks/eks-terraform-scripts/security-groups.tf
@@ -0,0 +1,46 @@
resource "aws_security_group" "worker_group_one" {
name_prefix = "worker_group_one"
vpc_id = module.vpc.vpc_id

ingress {
from_port = 22
to_port = 22
protocol = "tcp"

cidr_blocks = [
"10.0.0.0/8",
]
}
}

resource "aws_security_group" "worker_group_two" {
name_prefix = "worker_group_two"
vpc_id = module.vpc.vpc_id

ingress {
from_port = 22
to_port = 22
protocol = "tcp"

cidr_blocks = [
"192.168.0.0/16",
]
}
}

resource "aws_security_group" "all_workers" {
name_prefix = "all_workers"
vpc_id = module.vpc.vpc_id

ingress {
from_port = 22
to_port = 22
protocol = "tcp"

cidr_blocks = [
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
]
}
}
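
The `all_workers` group above is not referenced in `eks-cluster.tf`. If the intent is to attach it to every node, one way (assuming the EKS module version in use exposes a `worker_additional_security_group_ids` input; check the module documentation) would be:

```hcl
module "eks" {
  # ... existing arguments from eks-cluster.tf ...

  # Sketch only: attach the shared security group to every worker node,
  # assuming the module exposes worker_additional_security_group_ids.
  worker_additional_security_group_ids = [aws_security_group.all_workers.id]
}
```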
35 changes: 35 additions & 0 deletions aws/eks/eks-terraform-scripts/variables.tf
@@ -0,0 +1,35 @@
variable "region" {
description = "AWS region."
type = string
default = "eu-west-2"
}

variable "enable_nat_gateway" {
description = "Should be true if you want to provision NAT Gateways for each of your private networks."
type = bool
default = true
}

variable "single_nat_gateway" {
description = "Should be true if you want to provision a single shared NAT Gateway across all of your private networks."
type = bool
default = true
}

variable "enable_dns_hostnames" {
description = "Needs to be true to have a functional EKS cluster; it enables DNS hostnames in the VPC."
type = bool
default = true
}

variable "enable_dns_support" {
description = "Needs to be true to have a functional EKS cluster; it enables DNS support in the VPC."
type = bool
default = true
}

variable "domain_name_servers" {
description = "List of name servers to configure in /etc/resolv.conf."
type = list(string)
default = ["AmazonProvidedDNS"]
}
54 changes: 54 additions & 0 deletions aws/eks/eks-terraform-scripts/vpc.tf
@@ -0,0 +1,54 @@
provider "aws" {
region = var.region
}

data "aws_availability_zones" "available" {
state = "available"
}

resource "random_string" "suffix" {
length = 8
special = false
}

locals {
cluster_name = "csc-eks-${random_string.suffix.result}"
}

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.66.0"

name = "csc-vpc"
cidr = "10.0.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
enable_nat_gateway = var.enable_nat_gateway
single_nat_gateway = var.single_nat_gateway
enable_dns_hostnames = var.enable_dns_hostnames
enable_dns_support = var.enable_dns_support

tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
}

public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}

private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}

resource "aws_vpc_dhcp_options_association" "dns_resolver" {
vpc_id = module.vpc.vpc_id
dhcp_options_id = aws_vpc_dhcp_options.dns_resolver.id
}

resource "aws_vpc_dhcp_options" "dns_resolver" {
domain_name_servers = var.domain_name_servers
}
