Merge pull request #72 from cookpad/cluster-documentation
Documentation for the cluster module
errm committed Mar 19, 2020
2 parents 1258657 + e30123b commit 2a7d79e
Showing 6 changed files with 164 additions and 62 deletions.
10 changes: 3 additions & 7 deletions README.md
@@ -17,12 +17,6 @@ minimal extra configuration, for example for testing and development purposes.


```hcl
provider "aws" {
region = "us-east-1"
version = "~> 2.52"
}
module "eks" {
source = "cookpad/eks/aws"
@@ -31,11 +25,13 @@ module "eks" {
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
}
```
[see example](./examples/eks)
[see example](./examples/eks/main.tf)

For more advanced uses, we recommend that you construct and configure
your clusters using the modules contained within the [`modules`](./modules) folder.

[see example](./examples/cluster/main.tf)

This allows for much more flexibility, for example to:

* Provision a cluster in an existing VPC.
44 changes: 44 additions & 0 deletions examples/cluster/environment.tf
@@ -0,0 +1,44 @@
# In the test we provision the network and IAM resources using the environment
# module, then look up the relevant config here.
# This simulates launching a cluster in an existing VPC.

locals {
availability_zones = toset(["us-east-1a", "us-east-1b", "us-east-1c"])
vpc_config = {
vpc_id = data.aws_vpc.network.id
public_subnet_ids = { for subnet in data.aws_subnet.public : subnet.availability_zone => subnet.id }
private_subnet_ids = { for subnet in data.aws_subnet.private : subnet.availability_zone => subnet.id }
}

iam_config = {
service_role = "eksServiceRole-${var.cluster_name}"
node_role = "EKSNode-${var.cluster_name}"
admin_role = "EKSAdmin-${var.cluster_name}"
}
}

data "aws_vpc" "network" {
tags = {
Name = var.cluster_name
}
}

data "aws_subnet" "public" {
for_each = local.availability_zones

availability_zone = each.value
vpc_id = data.aws_vpc.network.id
tags = {
Name = "${var.cluster_name}-public-${each.value}"
}
}

data "aws_subnet" "private" {
for_each = local.availability_zones

availability_zone = each.value
vpc_id = data.aws_vpc.network.id
tags = {
Name = "${var.cluster_name}-private-${each.value}"
}
}
51 changes: 2 additions & 49 deletions examples/cluster/main.tf
@@ -3,60 +3,13 @@ provider "aws" {
version = "2.52.0"
}

data "aws_vpc" "network" {
tags = {
Name = var.cluster_name
}
}

locals {
availability_zones = toset(["us-east-1a", "us-east-1b", "us-east-1c"])
}

data "aws_subnet" "public" {
for_each = local.availability_zones

availability_zone = each.value
vpc_id = data.aws_vpc.network.id
tags = {
Name = "${var.cluster_name}-public-${each.value}"
}
}

data "aws_subnet" "private" {
for_each = local.availability_zones

availability_zone = each.value
vpc_id = data.aws_vpc.network.id
tags = {
Name = "${var.cluster_name}-private-${each.value}"
}
}

module "cluster" {
source = "../../modules/cluster"

name = var.cluster_name

vpc_config = {
vpc_id = data.aws_vpc.network.id
public_subnet_ids = {
us-east-1a = data.aws_subnet.public["us-east-1a"].id
us-east-1b = data.aws_subnet.public["us-east-1b"].id
us-east-1c = data.aws_subnet.public["us-east-1c"].id
}
private_subnet_ids = {
us-east-1a = data.aws_subnet.private["us-east-1a"].id
us-east-1b = data.aws_subnet.private["us-east-1b"].id
us-east-1c = data.aws_subnet.private["us-east-1c"].id
}
}

iam_config = {
service_role = "eksServiceRole-${var.cluster_name}"
node_role = "EKSNode-${var.cluster_name}"
admin_role = "EKSAdmin-${var.cluster_name}"
}
vpc_config = local.vpc_config
iam_config = local.iam_config

aws_auth_role_map = [
{
109 changes: 109 additions & 0 deletions modules/cluster/README.md
@@ -0,0 +1,109 @@
# cluster module

This module provisions an EKS cluster, including the EKS Kubernetes control
plane, several important cluster services (critical add-ons), and nodes to
run these services.

It will **not** provision any nodes to run non-cluster services.
You will need to provision nodes for your workloads separately using the
`asg_node_group` module (see the sketch after the usage example below).

## Usage

```hcl
module "cluster" {
source = "cookpad/eks/aws//modules/cluster"
name = "sal-9000"
vpc_config = module.vpc.config
iam_config = module.iam.config
}
```

[see example](../../examples/cluster/main.tf)
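
To run your own workloads you would pair this with the `asg_node_group`
module. A minimal sketch, assuming that module accepts the cluster module's
`config` output via a `cluster_config` variable (check that module's README
for its actual inputs):

```hcl
module "nodes" {
  source = "cookpad/eks/aws//modules/asg_node_group"

  # Assumed wiring: the node group reads cluster details (name, security
  # groups, and so on) from the cluster module's config output.
  cluster_config = module.cluster.config
}
```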

## Features

* Provisions a Kubernetes control plane by creating and configuring an EKS cluster.
* Configures CloudWatch logging for the control plane.
* Configures [envelope encryption](https://aws.amazon.com/about-aws/whats-new/2020/03/amazon-eks-adds-envelope-encryption-for-secrets-with-aws-kms/) for Kubernetes secrets with KMS.
* Provisions a node group dedicated to running critical cluster-level services:
* [cluster-autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)
* [metrics-server](https://github.com/kubernetes-sigs/metrics-server)
* [prometheus-node-exporter](https://github.com/prometheus/node_exporter)
* [aws-node-termination-handler](https://github.com/aws/aws-node-termination-handler)
* Configures EKS [cluster authentication](https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html).
* Provisions security groups for node-to-cluster and infra-node communication.
* Supports [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).

## aws-auth mappings

To map IAM roles or users to Kubernetes groups, provide configuration in
`aws_auth_role_map` and `aws_auth_user_map`.

The module automatically adds the node role to the `system:bootstrappers` and
`system:nodes` groups (which are required for nodes to join the cluster).


The admin role is automatically added to the `system:masters` group.
This is required so that the module can apply configuration to the cluster
via kubectl.

For example:

```hcl
module "cluster" {
source = "cookpad/eks/aws//modules/cluster"
...
aws_auth_role_map = [
{
username = "PowerUser"
role_arn = "arn:aws:iam::123456789000:role/PowerUser"
groups = ["system:masters"]
},
{
username = "ReadOnlyUser"
role_arn = "arn:aws:iam::123456789000:role/ReadonlyUser"
groups = ["system:basic-user"]
}
]
aws_auth_user_map = [
{
username = "cookpadder"
role_arn = "arn:aws:iam::123456789000:user/admin/cookpadder"
groups = ["system:masters"]
}
]
```

## Secret encryption

This feature is enabled by default, but may be disabled by setting
`envelope_encryption_enabled = false`.

When enabled, secrets are automatically encrypted with a Kubernetes-generated
data encryption key, which is then encrypted using a KMS master key.

By default, a new KMS customer master key is generated per cluster, but you may
specify the ARN of an existing key by setting `kms_cmk_arn`.
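
For example, to keep envelope encryption enabled but use an existing key
(a sketch; the key ARN below is a placeholder):

```hcl
module "cluster" {
  source = "cookpad/eks/aws//modules/cluster"
  ...
  # Encrypt the data encryption key with a pre-existing CMK instead of
  # generating a new one per cluster.
  kms_cmk_arn = "arn:aws:kms:us-east-1:123456789000:key/1234abcd-12ab-34cd-56ef-1234567890ab"

  # Or disable the feature entirely:
  # envelope_encryption_enabled = false
}
```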

## Cluster critical add-ons

By default all add-ons are set up. If you want to disable this behaviour you
may do so by setting some or all of:

```hcl
cluster_autoscaler = false
metrics_server = false
prometheus_node_exporter = false
aws_node_termination_handler = false
```

Note that setting these values to false will not remove provisioned add-ons
from an existing cluster.

By default, if the cluster autoscaler is enabled, an IAM role is provisioned
with the permissions needed to alter managed auto scaling groups. If you wish
to manage this IAM role externally, set `cluster_autoscaler_iam_role_arn`.
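
Putting these options together, a cluster with some add-ons disabled and an
externally managed autoscaler role might look like this (a sketch; the role
ARN is a placeholder):

```hcl
module "cluster" {
  source = "cookpad/eks/aws//modules/cluster"
  ...
  # These add-ons are provisioned by other means in this environment.
  prometheus_node_exporter     = false
  aws_node_termination_handler = false

  # Use an IAM role managed outside this module for the cluster autoscaler.
  cluster_autoscaler_iam_role_arn = "arn:aws:iam::123456789000:role/ClusterAutoscaler"
}
```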
4 changes: 2 additions & 2 deletions modules/iam/README.md
@@ -2,15 +2,15 @@

This module configures the IAM roles needed to run an EKS cluster.

# Features
## Features

* Configures a service role to be assumed by an EKS cluster.
* Configures a role and instance profile for use by EC2 worker nodes.


This module outputs a config object that may be used to configure the cluster module's `iam_config` variable.
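
In practice that wiring looks something like this (a sketch, assuming the
module is instantiated as `module "iam"`, as in the usage example below):

```hcl
module "cluster" {
  source = "cookpad/eks/aws//modules/cluster"
  ...
  # Pass the roles created by the iam module straight through to the cluster.
  iam_config = module.iam.config
}
```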

# Usage
## Usage

```hcl
module "iam" {
8 changes: 4 additions & 4 deletions modules/vpc/README.md
@@ -2,7 +2,7 @@

This module provisions an AWS VPC network that can be used to run EKS clusters.

# Usage
## Usage

```hcl
provider "aws" {
@@ -56,13 +56,13 @@ module "sal" {
}
```

# Features
## Features

As well as configuring the subnets and route table of the provisioned VPC, this
module also provisions internet and NAT gateways, to provide internet access to
nodes running in all subnets.

# Restrictions
## Restrictions

In order to run an EKS cluster you must create subnets in at least 3 availability
zones.
@@ -72,7 +72,7 @@ up to 7 subnet pairs.

The size of each subnet is relative to the CIDR block chosen for the VPC.
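
For instance, a VPC sized for a three-zone cluster might look like this (a
sketch; the `cidr_block` and `availability_zones` variable names are
assumptions, see the usage example above for the module's actual inputs):

```hcl
module "vpc" {
  source = "cookpad/eks/aws//modules/vpc"

  name = "sal-9000"

  # Per the restrictions above: at least three availability zones, and a CIDR
  # block large enough for a public and private subnet pair in each zone.
  cidr_block         = "10.4.0.0/18"
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
}
```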

# Development
## Development

This module is tested by [`test/vpc_test.go`](test/vpc_test.go) which validates
the example configuration in [`examples/vpc`](examples/vpc).
