fheinle-mak/terraform-aws-eks

All terraform modules that are related or supporting EKS setup
Why

To spin up a complete EKS cluster with all necessary components. These include:

  • VPC
  • EKS cluster
  • ALB ingress controller
  • Fluent Bit
  • External Secrets
  • metrics shipped to CloudWatch
How to run

data "aws_availability_zones" "available" {}

locals {
  vpc_name           = "dasmeta-prod-1"
  cidr               = "172.16.0.0/16"
  availability_zones = data.aws_availability_zones.available.names
  private_subnets    = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
  public_subnets     = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]

  cluster_enabled_log_types = ["audit"]

  # The EKS API server endpoint is public by default. Set this to false if the endpoint should be private.
  cluster_endpoint_public_access = true

  public_subnet_tags = {
    "kubernetes.io/cluster/production" = "shared"
    "kubernetes.io/role/elb"           = "1"
  }
  private_subnet_tags = {
    "kubernetes.io/cluster/production" = "shared"
    "kubernetes.io/role/internal-elb"  = "1"
  }

  cluster_name        = "your-cluster-name-goes-here"
  alb_log_bucket_name = "your-log-bucket-name-goes-here"

  # IAM users mapped into the cluster (referenced as local.users below); same shape as the users example in the max setup.
  users = [
    {
      username = "devops1"
      group    = ["system:masters"]
    }
  ]

  fluent_bit_name = "fluent-bit"
  log_group_name  = "fluent-bit-cloudwatch-env"
}


# Minimum

module "cluster_min" {
  source  = "dasmeta/eks/aws"
  version = "0.1.1"

  cluster_name        = local.cluster_name
  users               = local.users
  vpc_name            = local.vpc_name
  cidr                = local.cidr
  availability_zones  = local.availability_zones
  private_subnets     = local.private_subnets
  public_subnets      = local.public_subnets
  public_subnet_tags  = local.public_subnet_tags
  private_subnet_tags = local.private_subnet_tags
}
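
The module's documented outputs cluster_host, cluster_certificate and cluster_token (see the Outputs section below) are meant for authentication in the helm/kubectl/kubernetes providers. A minimal sketch of that wiring, assuming cluster_certificate is returned base64-encoded (drop base64decode() if the module already returns the decoded PEM):

# Hedged example: wire the cluster_min outputs into the kubernetes and helm providers.
provider "kubernetes" {
  host                   = module.cluster_min.cluster_host
  cluster_ca_certificate = base64decode(module.cluster_min.cluster_certificate) # assumption: output is base64-encoded
  token                  = module.cluster_min.cluster_token
}

provider "helm" {
  kubernetes {
    host                   = module.cluster_min.cluster_host
    cluster_ca_certificate = base64decode(module.cluster_min.cluster_certificate) # assumption: output is base64-encoded
    token                  = module.cluster_min.cluster_token
  }
}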


# Max @TODO: the max parameter passing setup below still needs to be checked/fixed

module "cluster_max" {
  source  = "dasmeta/eks/aws"
  version = "0.1.1"

  ### VPC
  vpc_name              = local.vpc_name
  cidr                  = local.cidr
  availability_zones    = local.availability_zones
  private_subnets       = local.private_subnets
  public_subnets        = local.public_subnets
  public_subnet_tags    = local.public_subnet_tags
  private_subnet_tags   = local.private_subnet_tags
  cluster_enabled_log_types = local.cluster_enabled_log_types
  cluster_endpoint_public_access = local.cluster_endpoint_public_access

  ### EKS
  cluster_name          = local.cluster_name
  manage_aws_auth       = true

  # IAM users with their usernames and groups. The group defaults to ["system:masters"].
  users = [
    {
      username = "devops1"
      group    = ["system:masters"]
    },
    {
      username = "devops2"
      group    = ["system:kube-scheduler"]
    },
    {
      username = "devops3"
    }
  ]

  # Use node_groups when you need nodes created in a specific subnet/zone (note: in this case the EC2 instances do not get a specific Name).
  # Otherwise you can use the worker_groups variable.

  node_groups = {
    example =  {
      name  = "nodegroup"
      name-prefix     = "nodegroup"
      additional_tags = {
          "Name"      = "node"
          "ExtraTag"  = "ExtraTag"  
      }

      instance_type   = "t3.xlarge"
      max_capacity    = 1
      disk_size       = 50
      create_launch_template = false
      subnet = ["subnet_id"]
    }
  }

  node_groups_default = {
      disk_size      = 50
      instance_types = ["t3.medium"]
    }

  worker_groups = {
    default = {
      name              = "nodes"
      instance_type     = "t3.xlarge"
      asg_max_size      = 3
      root_volume_size  = 50
    }
  }

  workers_group_defaults = {
    launch_template_use_name_prefix = true
    launch_template_name            = "default"
    root_volume_type                = "gp2"
    root_volume_size                = 50
  }

  ### ALB-INGRESS-CONTROLLER
  alb_log_bucket_name = local.alb_log_bucket_name

  ### FLUENT-BIT
  fluent_bit_name = local.fluent_bit_name
  log_group_name  = local.log_group_name

  # Should be refactored to be installed from the cluster module; for prod it is currently done from metrics-server.tf
  ### METRICS-SERVER
  # enable_metrics_server = false
  metrics_server_name     = "metrics-server"
}
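
Other stacks or workspaces can consume what the module creates by re-exporting its documented outputs. A minimal sketch against module.cluster_max (output names come from the Outputs section below):

# Re-export a few of the documented module outputs for downstream stacks.
output "cluster_id" {
  value = module.cluster_max.cluster_id
}

output "vpc_id" {
  value = module.cluster_max.vpc_id
}

output "vpc_private_subnets" {
  value = module.cluster_max.vpc_private_subnets
}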

Requirements

| Name      | Version    |
|-----------|------------|
| terraform | >= 0.14.11 |
| aws       | >= 3.31    |
| helm      | >= 2.4.1   |
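
These constraints would typically be pinned in the calling root module. A minimal sketch matching the versions above; the hashicorp/aws and hashicorp/helm registry addresses are the standard provider sources and are assumed here, not stated by this README:

terraform {
  required_version = ">= 0.14.11"

  required_providers {
    aws = {
      source  = "hashicorp/aws"  # assumed standard registry address
      version = ">= 3.31"
    }
    helm = {
      source  = "hashicorp/helm" # assumed standard registry address
      version = ">= 2.4.1"
    }
  }
}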

Providers

| Name | Version |
|------|---------|
| aws  | >= 3.31 |

Modules

| Name | Source | Version |
|------|--------|---------|
| alb-ingress-controller | ./modules/aws-load-balancer-controller | n/a |
| cloudwatch-metrics | ./modules/cloudwatch-metrics | n/a |
| eks-cluster | ./modules/eks | n/a |
| external-secrets | ./modules/external-secrets | n/a |
| fluent-bit | ./modules/fluent-bit | n/a |
| metrics-server | ./modules/metrics-server | n/a |
| sso-rbac | ./modules/sso-rbac | n/a |
| vpc | ./modules/vpc | n/a |
| weave-scope | ./modules/weave-scope | n/a |

Resources

| Name | Type |
|------|------|
| aws_region.current | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| account_id | n/a | `string` | n/a | yes |
| alb_log_bucket_name | n/a | `string` | `""` | no |
| alb_log_bucket_path | ALB-INGRESS-CONTROLLER | `string` | `""` | no |
| availability_zones | List of VPC availability zones, e.g. `["eu-west-1a", "eu-west-1b", "eu-west-1c"]`. | `list(string)` | n/a | yes |
| bindings | Variable which describes group and role binding | `list(object({ group = string, namespace = string, roles = list(string) }))` | `[]` | no |
| cidr | CIDR IP range. | `string` | n/a | yes |
| cluster_enabled_log_types | A list of the desired control plane logs to enable. For more information, see the Amazon EKS Control Plane Logging documentation (https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) | `list(string)` | `["audit"]` | no |
| cluster_endpoint_public_access | n/a | `bool` | `true` | no |
| cluster_name | Name of the EKS cluster to create. | `string` | n/a | yes |
| cluster_version | Allows setting/changing the Kubernetes cluster version; the version needs to be updated at least once a year. See https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html for available versions. | `string` | `"1.22"` | no |
| enable_metrics_server | METRICS-SERVER | `bool` | `false` | no |
| enable_sso_rbac | Enable SSO RBAC integration or not | `bool` | `false` | no |
| external_secrets_namespace | The namespace of the external-secrets operator | `string` | `"kube-system"` | no |
| fluent_bit_name | FLUENT-BIT | `string` | `""` | no |
| log_group_name | n/a | `string` | `""` | no |
| manage_aws_auth | n/a | `bool` | `true` | no |
| map_roles | Additional IAM roles to add to the aws-auth configmap. | `list(object({ rolearn = string, username = string, groups = list(string) }))` | `[]` | no |
| metrics_server_name | n/a | `string` | `"metrics-server"` | no |
| node_groups | Map of EKS managed node group definitions to create | `any` | `{ "default": { "desired_size": 2, "instance_types": ["t3.medium"], "max_size": 4, "min_size": 2 } }` | no |
| node_groups_default | Map of EKS managed node group default configurations | `any` | `{ "disk_size": 50, "instance_types": ["t3.medium"] }` | no |
| node_security_group_additional_rules | n/a | `any` | `{ "ingress_cluster_8443": { "description": "Metric server to node groups", "from_port": 8443, "protocol": "tcp", "source_cluster_security_group": true, "to_port": 8443, "type": "ingress" } }` | no |
| private_subnet_tags | n/a | `map(any)` | `{}` | no |
| private_subnets | Private subnets of the VPC. | `list(string)` | n/a | yes |
| public_subnet_tags | n/a | `map(any)` | `{}` | no |
| public_subnets | Public subnets of the VPC. | `list(string)` | n/a | yes |
| roles | Variable describing which roles the user will have in K8s | `list(object({ actions = list(string), resources = list(string) }))` | `[]` | no |
| users | n/a | `any` | n/a | yes |
| vpc_name | Name of the VPC to create. | `string` | n/a | yes |
| weave_scope_config | Weave Scope namespace configuration variables | `object({ create_namespace = bool, namespace = string, annotations = map(string), ingress_host = string, ingress_class = string, ingress_name = string, service_type = string, weave_helm_release_name = string })` | `{ "annotations": {}, "create_namespace": true, "ingress_class": "", "ingress_host": "", "ingress_name": "weave-ingress", "namespace": "meta-system", "service_type": "NodePort", "weave_helm_release_name": "weave" }` | no |
| weave_scope_enabled | Whether to enable Weave Scope or not | `bool` | `false` | no |
| worker_groups | Worker groups. | `any` | `{}` | no |
| workers_group_defaults | Worker group defaults. | `any` | `{ "launch_template_name": "default", "launch_template_use_name_prefix": true, "root_volume_size": 50, "root_volume_type": "gp2" }` | no |
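
The list(object) inputs above follow the documented shapes. A hedged sketch of values for map_roles and bindings; the ARN, group and role names are placeholders for illustration, not values from this repository:

locals {
  # Additional IAM roles for the aws-auth configmap; shape matches the map_roles input documented above.
  map_roles = [
    {
      rolearn  = "arn:aws:iam::111111111111:role/ci-deployer" # placeholder ARN
      username = "ci-deployer"                                # placeholder username
      groups   = ["system:masters"]
    }
  ]

  # Group-to-role bindings; shape matches the bindings input documented above.
  bindings = [
    {
      group     = "developers" # placeholder group name
      namespace = "default"
      roles     = ["view"]     # placeholder role name
    }
  ]
}

These locals could then be passed to the module call as map_roles = local.map_roles and bindings = local.bindings.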

Outputs

| Name | Description |
|------|-------------|
| cluster_certificate | EKS cluster certificate used for authentication/access in helm/kubectl/kubernetes providers |
| cluster_host | EKS cluster host name used for authentication/access in helm/kubectl/kubernetes providers |
| cluster_iam_role_name | n/a |
| cluster_id | n/a |
| cluster_primary_security_group_id | n/a |
| cluster_security_group_id | n/a |
| cluster_token | EKS cluster token used for authentication/access in helm/kubectl/kubernetes providers |
| eks_auth_configmap | n/a |
| eks_module | n/a |
| eks_oidc_root_ca_thumbprint | Grab eks_oidc_root_ca_thumbprint from oidc_provider_arn. |
| map_user_data | n/a |
| oidc_provider_arn | ## CLUSTER |
| role_arns | n/a |
| role_arns_without_path | n/a |
| vpc_cidr_block | The CIDR block of the VPC |
| vpc_default_security_group_id | The ID of the default security group created for the VPC |
| vpc_id | The newly created VPC ID |
| vpc_nat_public_ips | The list of Elastic public IPs for the VPC |
| vpc_private_subnets | The newly created VPC private subnet IDs list |
| vpc_public_subnets | The newly created VPC public subnet IDs list |
