Commit 6e6519f

eks worker group replace coreos with official ami, add test for eks cluster

smalltown committed Apr 2, 2019
1 parent 75a7f5f commit 6e6519f
Showing 54 changed files with 509 additions and 1,045 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -3,4 +3,5 @@
.terraform/
.terratest/
vendor/
+.test-data
.DS_Store
4 changes: 2 additions & 2 deletions README.md
@@ -137,10 +137,10 @@ You have completed one Kubernetes cluster the same as below picture, and let me
Vishwakarma includes 4 major modules:

### aws/network
-Create one AWS VPC including private and public subnet, and one ec2 instance called bastion hosts in public subnet, hence, one can access the resource hosting in the private subnet, refer [**Here**](VARIABLES.md#aws/network) for the detail variable inputs
+Creates an AWS VPC with private and public subnets, plus an EC2 bastion host in the public subnet so that resources in the private subnet can be reached; refer to [**aws/network**](VARIABLES.md#aws/network) for the detailed variable inputs

### aws/eks or aws/elastikube
-This module creates the AWS EKS cluster / ElastiKube, Terraform is responsible for the complicated k8s compoments, and it takes about 8~10 minutes to complete, refer [**Here**](VARIABLES.md#master) for the detail variable inputs
+This module creates the AWS EKS or ElastiKube cluster; Terraform is responsible for the complicated Kubernetes components, which takes about 10~15 minutes to complete; refer to [**Here**](VARIABLES.md#aws/) for the detailed variable inputs
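To make the module boundaries concrete, here is a minimal sketch of wiring the network and cluster modules together. This is illustrative only: the module source paths and the example values are assumptions based on the examples directory in this commit, not verbatim project code.

```hcl
# Illustrative sketch; source paths and values are assumptions.
# Variable names follow the examples shipped in this repository.
module "network" {
  source = "./modules/aws/network"

  aws_region       = "us-east-1"   # hypothetical region
  bastion_key_name = "my-key-pair" # hypothetical EC2 key pair
}

module "eks" {
  source = "./modules/aws/eks"

  aws_region = "us-east-1"

  # place the cluster in the private subnets created by the network module
  subnet_ids = ["${module.network.private_subnet_ids}"]
}
```

Applying the modules one at a time with `terraform apply -target=module.network` and then `-target=module.eks`, as the example READMEs in this commit do, keeps the network in place while the cluster is created.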


### aws/eks-worker-asg or aws/kube-worker
55 changes: 12 additions & 43 deletions VARIABLES.md
@@ -11,14 +11,14 @@
| project | Specifies which project the service will be hosted for | string | vishwakarma | no |
| bastion_ami_id | The AWS AMI id for bastion, if that isn't provided, ubuntu latest ami will be used | string | "" | no |
| bastion_instance_type | The AWS instance type for bastion | string | t2.micro | no |
| bastion_key_name | The AWS EC2 key name for bastion | string | - | yes |
| private_zone | Create a private Route53 hosted zone | string | false | no |
| extra_tags | Extra AWS tags to be applied to created resources | map | {} | no |


### outputs

-## eks/master
+## aws/eks

### inputs
| Name | Description | Type | Default | Required |
@@ -36,7 +36,7 @@

### outputs

-## eks/worker-asg
+## aws/eks-worker

### inputs
| Name | Description | Type | Default | Required |
@@ -79,45 +79,14 @@

### outputs

-## eks/worker-spot
-
-### inputs
-| Name | Description | Type | Default | Required |
-| --- | --- | :---: | :---: | :---: |
-| phase | Specifies which phase is used for this eks worker node group | string | dev | no |
-| project | Specifies which project is used for this eks worker node group | string | vishwakarma | no |
-| ssh_key | The ssh key name for worker node instances | string | - | yes |
-| aws_region | The AWS region to host this eks worker node group | string | - | yes |
-| vpc_id | The vpc id to host this eks worker node group | string | - | yes |
-| aws_az_number | How many AZs to use | string | 3 | no |
-| container_linux_channel | CoreOS release channel for worker node | string | stable | no |
-| container_linux_version | CoreOS release version for worker node | string | latest | no |
-| cluster_name | The eks cluster name | string | - | yes |
-| cluster_endpoint | The eks cluster endpoint | string | - | yes |
-| certificate_authority_data | The eks cluster certificate authority data | string | - | yes |
-| worker_name | The name for worker node | string | - | yes |
-| ec2_type | The ec2 type for worker node | map | - | yes |
-| ec2_ami | The ami for worker node | string | "" | no |
-| instance_count | The minimal worker node number | string | 1 | no |
-| subnet_ids | The subnet ids for worker node to host | list | - | yes |
-| sg_ids | The security group IDs to be applied for worker node | list | - | yes |
-| load_balancers | List of ELBs to attach all worker instances to | list | [] | no |
-| target_group_arns | List of target group arns to attach all worker instances to | list | [] | no |
-| container_images | Container images to use | map | - | yes |
-| bootstrap_upgrade_cl | Whether to trigger a Container Linux OS upgrade during the bootstrap process | string | true | no |
-| ntp_servers | A list of NTP servers to be used for time synchronization on the cluster nodes | list | [] | no |
-| kubelet_node_label | The node label to be applied by the kubelet | string | "" | no |
-| cloud_provider | The cloud provider to be used for the kubelet | string | aws | no |
-| image_re | Regular expression used to extract repo and tag components from image strings | string | /^([^/]+/[^/]+/[^/]+):(.*)$/ | no |
-| client_ca_file | The eks certificate file path | string | /etc/kubernetes/pki/ca.crt | no |
-| heptio_authenticator_aws_url | heptio authenticator aws download url | string | https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-06-05/bin/linux/amd64/heptio-authenticator-aws | no |
-| extra_tags | Extra AWS tags to be applied to created resources | map | {} | no |
-| root_volume_type | The type of volume for the root block device | string | gp2 | no |
-| root_volume_size | The size of the volume in gigabytes for the root block device | string | 200 | no |
-| root_volume_iops | The amount of provisioned IOPS for the root block device | string | 100 | no |
-| worker_iam_role | Existing IAM role to use for the instance profiles of worker nodes | string | "" | no |
-| s3_bucket | The s3 bucket to store ignition file for EC2 userdata | string | - | yes |
-
-### outputs
+## aws/elastikube
+
+### inputs
+
+### outputs
+
+## aws/kube-worker
+
+### inputs
+
+### outputs
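The input and output tables for aws/kube-worker are still stubs at this point. For orientation, a minimal invocation can be sketched from the examples/elastikube-cluster/main.tf changes in this same commit; the values below are illustrative placeholders, not documented defaults.

```hcl
# Sketch assembled from this commit's examples; values are illustrative.
module "worker_general" {
  source = "../../modules/aws/kube-worker"

  cluster_name       = "my-cluster" # hypothetical cluster name
  aws_region         = "us-east-1"  # hypothetical region
  kubernetes_version = "v1.13.5"    # hypothetical version

  worker_config = {
    name           = "on-demand"
    instance_count = "2"
    ec2_type_1     = "t3.medium"
    ec2_type_2     = "t2.medium"
  }

  ssh_key = "my-key-pair" # hypothetical EC2 key pair
}
```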
4 changes: 2 additions & 2 deletions examples/aws-iam-authenticator/main.tf
@@ -82,7 +82,7 @@ module "kubernetes" {
module "worker_on_demand" {
source = "../../modules/aws/kube-worker"

-  name         = "${local.cluster_name}"
+  cluster_name = "${local.cluster_name}"
aws_region = "${var.aws_region}"
kubernetes_version = "${local.kubernetes_version}"
kube_service_cidr = "${var.service_cidr}"
@@ -120,7 +120,7 @@ module "worker_on_demand" {
module "worker_spot" {
source = "../../modules/aws/kube-worker"

-  name         = "${local.cluster_name}"
+  cluster_name = "${local.cluster_name}"
aws_region = "${var.aws_region}"
kubernetes_version = "${local.kubernetes_version}"
kube_service_cidr = "${var.service_cidr}"
75 changes: 75 additions & 0 deletions examples/eks-cluster/README.md
@@ -0,0 +1,75 @@
# EKS Cluster Example

This folder contains a simple Terraform module that deploys resources in [AWS](https://aws.amazon.com/) to demonstrate
how you can use Terratest to write automated tests for your AWS Terraform code. It deploys an AWS VPC with a bastion
host and an EKS cluster with two worker groups (spot and on-demand [EC2 Instances](https://aws.amazon.com/ec2/)) in
the AWS region specified in the `aws_region` variable.

Check out [test/eks_cluster_test.go](/test/eks_cluster_test.go) to see how you can write
automated tests for this module.

**WARNING**: This module and the automated tests for it deploy real resources into your AWS account, which can cost you money.

## Running this module manually

1. Sign up for [AWS](https://aws.amazon.com/).
2. Configure your AWS credentials using one of the [supported methods for AWS CLI
tools](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html), such as setting the
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables. If you're using the `~/.aws/config` file for profiles then export `AWS_SDK_LOAD_CONFIG` as "True".
3. Install [Terraform](https://www.terraform.io/) and make sure it's on your `PATH`.

4. Execute the commands below to set up the cluster:
```
# initialize: sync terraform modules and install provider plugins
~$ terraform init
# create the network infrastructure
~$ terraform apply -target=module.network
# create the kubernetes master components
~$ terraform apply -target=module.eks
# create the on-demand and spot worker groups
~$ terraform apply
```

5. When you're done, execute the commands below to destroy everything:

```
$ terraform destroy -target=module.worker_on_demand
$ terraform destroy -target=module.worker_spot
$ terraform destroy -target=module.eks
$ terraform destroy -target=module.network
```




## Running automated tests against this module

1. Sign up for [AWS](https://aws.amazon.com/).
2. Configure your AWS credentials using one of the [supported methods for AWS CLI
tools](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html), such as setting the
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables. If you're using the `~/.aws/config` file for profiles then export `AWS_SDK_LOAD_CONFIG` as "True".
3. Install [Terraform](https://www.terraform.io/) and make sure it's on your `PATH`.
4. Install [Golang](https://golang.org/) and make sure this code is checked out into your `GOPATH`.
5. Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) and make sure it's on your `PATH`.
6. `cd test`
7. `dep ensure`
8. `go test -timeout 60m -v -run TestEKSCluster`
9. If the tests pass, the output looks like below:
```
...
agent.go:114: Generating SSH Agent with given KeyPair(s)
agent.go:68: could not serve ssh agent read unix /var/folders/mg/yc74r0qs0g58wnt0q1_4t88h0000gn/T/ssh-agent-881464729/ssh_auth.sock->: use of closed network connection
PASS
ok github.com/vishwakarma/test 2046.234s
```
4 changes: 0 additions & 4 deletions examples/eks-cluster/main.tf
@@ -45,7 +45,6 @@ module "worker_on_demand" {

cluster_name = "${local.cluster_name}"
aws_region = "${var.aws_region}"
-  kubernetes_version = "${var.kubernetes_version}"

security_group_ids = ["${module.eks.worker_sg_id}"]
subnet_ids = ["${module.network.private_subnet_ids}"]
@@ -64,7 +63,6 @@
spot_instance_pools = 1
}

-  s3_bucket = "${module.eks.s3_bucket}"
ssh_key = "${var.key_pair_name}"

extra_tags = "${merge(map(
@@ -82,7 +80,6 @@ module "worker_spot" {

cluster_name = "${local.cluster_name}"
aws_region = "${var.aws_region}"
-  kubernetes_version = "${var.kubernetes_version}"

security_group_ids = ["${module.eks.worker_sg_id}"]
subnet_ids = ["${module.network.private_subnet_ids}"]
@@ -101,7 +98,6 @@
spot_instance_pools = 1
}

-  s3_bucket = "${module.eks.s3_bucket}"
ssh_key = "${var.key_pair_name}"

extra_tags = "${merge(map(
7 changes: 7 additions & 0 deletions examples/eks-cluster/outputs.tf
@@ -0,0 +1,7 @@
output "bastion_public_ip" {
  value = "${module.network.bastion_public_ip}"
}

output "ignition_s3_bucket" {
  value = "${module.eks.s3_bucket}"
}
3 changes: 1 addition & 2 deletions examples/elastikube-cluster/README.md
@@ -44,8 +44,8 @@ money.
5. When you're done, execute the commands below to destroy everything:

```
-$ terraform destroy -target=module.worker_on_demand
-$ terraform destroy -target=module.worker_spot
+$ terraform destroy -target=module.worker_general
$ terraform destroy -target=module.kubernetes
$ terraform destroy -target=module.network
```
@@ -55,7 +55,6 @@ money.

## Running automated tests against this module

-0. Run the tests from an AWS EC2 instance instead of your desktop; this module creates many AWS resources that take time to destroy, and running in EC2 reduces the risk of a failed teardown.
1. Sign up for [AWS](https://aws.amazon.com/).
2. Configure your AWS credentials using one of the [supported methods for AWS CLI
tools](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html), such as setting the
6 changes: 3 additions & 3 deletions examples/elastikube-cluster/main.tf
@@ -76,7 +76,7 @@ module "kubernetes" {
module "worker_on_demand" {
source = "../../modules/aws/kube-worker"

-  name         = "${local.cluster_name}"
+  cluster_name = "${local.cluster_name}"
aws_region = "${var.aws_region}"
kubernetes_version = "${local.kubernetes_version}"
kube_service_cidr = "${var.service_cidr}"
@@ -85,7 +85,7 @@
subnet_ids = ["${module.network.private_subnet_ids}"]

worker_config = {
-    name = "on_demand"
+    name = "on-demand"
instance_count = "2"
ec2_type_1 = "t3.medium"
ec2_type_2 = "t2.medium"
@@ -114,7 +114,7 @@ module "worker_on_demand" {
module "worker_spot" {
source = "../../modules/aws/kube-worker"

-  name         = "${local.cluster_name}"
+  cluster_name = "${local.cluster_name}"
aws_region = "${var.aws_region}"
kubernetes_version = "${local.kubernetes_version}"
kube_service_cidr = "${var.service_cidr}"