
Commit

chore: Remove terratest workflow and correct documentation links (aws…
bryantbiggs authored and Gumar Minibaev committed Mar 17, 2023
1 parent 8aab4c1 commit dd6645a
Showing 24 changed files with 44 additions and 2,869 deletions.
54 changes: 0 additions & 54 deletions .github/workflows/e2e-terratest.yml

This file was deleted.

File renamed without changes.
File renamed without changes.
185 changes: 6 additions & 179 deletions README.md

Large diffs are not rendered by default.

2 changes: 0 additions & 2 deletions docs/add-ons/aws-fsx-csi-driver.md
@@ -3,8 +3,6 @@
Fully managed shared storage built on the world's most popular high-performance file system.
This add-on deploys the [Amazon FSx for Lustre CSI Driver](https://aws.amazon.com/fsx/lustre/) into an EKS cluster.

Check out the [examples](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/examples/analytics/emr-eks-fsx-lustre) of using FSx for Lustre with EMR on EKS Spark jobs.

## Usage

The [Amazon FSx for Lustre CSI Driver](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/modules/kubernetes-addons/aws-fsx-csi-driver) can be deployed by enabling the add-on via the following.
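A minimal sketch of the usual pattern, assuming the `kubernetes-addons` module exposes a boolean flag named `enable_aws_fsx_csi_driver` (the flag name is an assumption; the module's own documentation is authoritative):

```hcl
# Assumed flag name; set inside the kubernetes-addons module block.
enable_aws_fsx_csi_driver = true
```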
2 changes: 1 addition & 1 deletion docs/add-ons/calico.md
@@ -4,7 +4,7 @@ Calico is a widely adopted, battle-tested open source networking and network sec
Calico provides two major services for Cloud Native applications: network connectivity between workloads and network security policy enforcement between workloads.
The [Calico Helm chart](https://projectcalico.docs.tigera.io/getting-started/kubernetes/helm#download-the-helm-chart) bootstraps Calico infrastructure on a Kubernetes cluster using the Helm package manager.

For complete project documentation, please visit the [Calico documentation site](https://www.tigera.io/calico-documentation/).
For complete project documentation, please visit the [Calico documentation site](https://docs.tigera.io/calico/next/about/).

## Usage

4 changes: 2 additions & 2 deletions docs/add-ons/cilium.md
@@ -5,8 +5,8 @@ Cilium is open source software for transparently securing the network connectivi
Cilium can be set up in two manners:
- In combination with the `Amazon VPC CNI plugin`. In this hybrid mode, the AWS VPC CNI plugin is responsible for setting up the virtual network devices as well as for IP address management (IPAM) via ENIs.
After the initial networking is setup for a given pod, the Cilium CNI plugin is called to attach eBPF programs to the network devices set up by the AWS VPC CNI plugin in order to enforce network policies, perform load-balancing and provide encryption.
Read the installation instruction [here](https://docs.cilium.io/en/stable/gettingstarted/cni-chaining-aws-cni/#chaining-aws-cni)
- As a replacement of `Amazon VPC CNI`, read the complete installation guideline [here](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-helm/)
Read the installation instruction [here](https://docs.cilium.io/en/latest/installation/cni-chaining-aws-cni/)
- As a replacement of `Amazon VPC CNI`, read the complete installation guideline [here](https://docs.cilium.io/en/latest/installation/k8s-install-helm/)

For complete project documentation, please visit the [Cilium documentation site](https://docs.cilium.io/en/stable/).

2 changes: 0 additions & 2 deletions docs/add-ons/crossplane.md
@@ -103,5 +103,3 @@ crossplane_helm_provider = {
enable = true
}
```

Check out the full [example](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/examples/crossplane) to deploy Crossplane with the `kubernetes-addons` module.
2 changes: 0 additions & 2 deletions docs/core-concepts.md
@@ -14,8 +14,6 @@ This document provides a high level overview of the Core Concepts that are embed

A `cluster` is simply an EKS cluster. EKS Blueprints provides for customizing the compute options you leverage with your `clusters`. The framework currently supports `EC2`, `Fargate` and `BottleRocket` instances. It also supports managed and self-managed node groups. To specify the type of compute you want to use for your `cluster`, you use the `managed_node_groups`, `self_managed_nodegroups`, or `fargate_profiles` variables.

See our [Node Groups](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/node-groups/) documentation and our [Node Group example directory](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/examples/node-groups) for detailed information.
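As an illustration only (the input shape below is an assumption, not taken from this commit), a managed node group might be declared roughly like this:

```hcl
# Illustrative sketch: one managed node group placed in private subnets.
# Input names and structure are assumptions, not a verified schema.
managed_node_groups = {
  default = {
    node_group_name = "managed-ondemand"
    instance_types  = ["m5.large"]
    min_size        = 1
    desired_size    = 2
    max_size        = 3
    subnet_ids      = module.vpc.private_subnets
  }
}
```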

## Add-on

`Add-ons` allow you to configure the operational tools that you would like to deploy into your EKS cluster. When you configure `add-ons` for a `cluster`, the `add-ons` will be provisioned at deploy time by leveraging the Terraform Helm provider. Add-ons can deploy both Kubernetes specific resources and AWS resources needed to support add-on functionality.
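For example, enabling a couple of add-ons typically amounts to toggling flags on the `kubernetes-addons` module; the sketch below is hedged, and the flag names and input wiring are assumptions:

```hcl
# Sketch of the add-on pattern: each add-on is toggled with a boolean and
# provisioned at deploy time via the Terraform Helm provider (names assumed).
module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons"

  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  enable_metrics_server     = true
  enable_cluster_autoscaler = true
}
```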
126 changes: 33 additions & 93 deletions docs/getting-started.md
@@ -10,128 +10,68 @@ First, ensure that you have installed the following tools locally.
2. [kubectl](https://Kubernetes.io/docs/tasks/tools/)
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)

## Deployment Steps
## Examples

The following steps will walk you through the deployment of an [example blueprint](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/examples/eks-cluster-with-new-vpc/main.tf). This example will deploy a new VPC, a private EKS cluster with public and private subnets, and one managed node group that will be placed in the private subnets. The example will also deploy the following add-ons into the EKS cluster:
Select an example from the [`examples/`](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/examples) directory and follow the instructions in its respective README.md file. The deployment steps for examples generally follow the deploy, validate, and clean-up steps shown below.

- AWS Load Balancer Controller
- Cluster Autoscaler
- CoreDNS
- kube-proxy
- Metrics Server
- vpc-cni
### Deploy

### Clone the repo

```sh
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
```

### Terraform INIT

CD into the example directory:

```sh
cd examples/eks-cluster-with-new-vpc/
```

Initialize the working directory with the following:
To provision this example:

```sh
terraform init
```

### Terraform PLAN

Verify the resources that will be created by this execution:

```sh
terraform plan
```

### Terraform APPLY

We will leverage Terraform's [target](https://learn.hashicorp.com/tutorials/terraform/resource-targeting?in=terraform/cli) functionality to deploy a VPC, an EKS Cluster, and Kubernetes add-ons in separate steps.

**Deploy the VPC**. This step will take roughly 3 minutes to complete.

```sh
terraform apply -target="module.vpc"
```

**Deploy the EKS cluster**. This step will take roughly 14 minutes to complete.

```sh
terraform apply -target="module.eks_blueprints"
```

**Deploy the add-ons**. This step will take roughly 5 minutes to complete.

```sh
terraform apply -target module.vpc
terraform apply -target module.eks
terraform apply
```

## Configure kubectl
Enter `yes` at the command prompt to apply

Terraform output will display a command in your console that you can use to bootstrap your local `kubeconfig`.
### Validate

```sh
configure_kubectl = "aws eks --region <region> update-kubeconfig --name <cluster-name>"
```
The following commands update the `kubeconfig` on your local machine and allow you to interact with your EKS cluster using `kubectl` to validate the CoreDNS deployment for Fargate.

Run the command in your terminal.
1. Run the `update-kubeconfig` command:

```sh
aws eks --region <region> update-kubeconfig --name <cluster-name>
aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME>
```

## Validation

### List worker nodes
2. View the pods that were created:

```sh
kubectl get nodes
```
kubectl get pods -A

You should see output similar to the following:

```
NAME STATUS ROLES AGE VERSION
ip-10-0-10-161.us-west-2.compute.internal Ready <none> 4h18m v1.21.5-eks-9017834
ip-10-0-11-171.us-west-2.compute.internal Ready <none> 4h18m v1.21.5-eks-9017834
ip-10-0-12-48.us-west-2.compute.internal Ready <none> 4h18m v1.21.5-eks-9017834
# Output should show some pods running
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66b965946d-gd59n 1/1 Running 0 92s
kube-system coredns-66b965946d-tsjrm 1/1 Running 0 92s
kube-system ebs-csi-controller-57cb869486-bcm9z 6/6 Running 0 90s
kube-system ebs-csi-controller-57cb869486-xw4z4 6/6 Running 0 90s
```

### List pods
3. View the nodes that were created:

```sh
kubectl get pods -n kube-system
```

You should see output similar to the following:
kubectl get nodes

```
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-954746b57-k9lhc 1/1 Running 1 15m
aws-load-balancer-controller-954746b57-q5gh4 1/1 Running 1 15m
aws-node-jlnkd 1/1 Running 1 15m
aws-node-k86pv 1/1 Running 0 12m
aws-node-kjcdg 1/1 Running 1 14m
cluster-autoscaler-aws-cluster-autoscaler-5d4446b58-d6frd 1/1 Running 1 15m
coredns-85d5b4454c-jksbw 1/1 Running 1 24m
coredns-85d5b4454c-x7wwd 1/1 Running 1 24m
kube-proxy-92slm 1/1 Running 1 18m
kube-proxy-bz5kb 1/1 Running 1 18m
kube-proxy-zl7cj 1/1 Running 1 18m
metrics-server-694d47d564-hzd8h 1/1 Running 1 15m
# Output should show some nodes running
NAME STATUS ROLES AGE VERSION
fargate-ip-10-0-10-11.us-west-2.compute.internal Ready <none> 8m7s v1.24.8-eks-a1bebd3
fargate-ip-10-0-10-210.us-west-2.compute.internal Ready <none> 2m50s v1.24.8-eks-a1bebd3
fargate-ip-10-0-10-218.us-west-2.compute.internal Ready <none> 8m6s v1.24.8-eks-a1bebd3
fargate-ip-10-0-10-227.us-west-2.compute.internal Ready <none> 8m8s v1.24.8-eks-a1bebd3
fargate-ip-10-0-10-42.us-west-2.compute.internal Ready <none> 8m6s v1.24.8-eks-a1bebd3
fargate-ip-10-0-10-71.us-west-2.compute.internal Ready <none> 2m48s v1.24.8-eks-a1bebd3
```

## Cleanup
### Destroy

To clean up your environment, destroy the Terraform modules in reverse order.
To tear down and remove the resources created in this example:

```sh
kubectl delete deployment inflate
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
terraform destroy -target="module.eks_blueprints" -auto-approve
terraform destroy -target="module.eks" -auto-approve
terraform destroy -auto-approve
```
4 changes: 2 additions & 2 deletions docs/internal/ci.md
@@ -1,6 +1,6 @@
# E2E tests

We use GitHub Actions to run end-to-end tests to verify all PRs. The GitHub Actions used are a combination of `aws-actions/configure-aws-credentials` and `hashicorp/setup-terraform@v1`. See the complete action definition [here](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/.github/workflows/e2e-terratest.yml).
We use GitHub Actions to run end-to-end tests to verify all PRs. The GitHub Actions used are a combination of `aws-actions/configure-aws-credentials` and `hashicorp/setup-terraform@v1`.

## Setup

@@ -58,4 +58,4 @@ Outputs:

3. Set up a GitHub repo secret called `ROLE_TO_ASSUME` and set it to the ARN of the role created in step 1.

4. We use an S3 backend to test the canonical [example](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/examples/eks-cluster-with-new-vpc/main.tf). This allows us to recover from any failures during the `apply` stage. If you are setting up your own CI pipeline, change the S3 bucket name in the backend configuration of the example.
4. We use an S3 backend for the e2e tests. This allows us to recover from any failures during the `apply` stage. If you are setting up your own CI pipeline, change the S3 bucket name in the backend configuration of the example (see the sketch below).
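A minimal sketch of such a backend block; the bucket, key, and region values are placeholders you would replace with your own:

```hcl
# Placeholder S3 backend configuration; substitute your own bucket,
# key, and region before running terraform init.
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder
    key    = "e2e/terraform.tfstate"     # placeholder
    region = "us-west-2"                 # placeholder
  }
}
```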
Binary file removed images/EKS_private_cluster.jpg
Binary file not shown.
Binary file removed images/Ray-Dashboard.png
Binary file not shown.
Binary file removed images/Ray-Grafana.png
Binary file not shown.
2 changes: 0 additions & 2 deletions modules/aws-eks-fargate-profiles/README.md
@@ -4,8 +4,6 @@

The Fargate profile allows you to declare which pods run on Fargate for your Amazon EKS cluster. This declaration is done through the profile's selectors. Each profile can have up to five selectors that contain a namespace and optional labels. You must define a namespace for every selector. The label field consists of multiple optional key-value pairs.

Check out the usage docs for Fargate Profiles [examples](https://aws-ia.github.io/terraform-aws-eks-blueprints/latest/node-groups/)
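A rough illustration of the selector idea; the exact input schema is not shown in this diff, so the shape below is an assumption:

```hcl
# Illustrative only: a Fargate profile whose selector matches pods in the
# "app" namespace carrying a specific label (input names assumed).
fargate_profiles = {
  app = {
    fargate_profile_name = "app"
    fargate_profile_namespaces = [{
      namespace = "app"
      k8s_labels = {
        WorkerType = "fargate"
      }
    }]
    subnet_ids = module.vpc.private_subnets
  }
}
```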

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements

2 changes: 0 additions & 2 deletions modules/aws-eks-managed-node-groups/README.md
@@ -12,8 +12,6 @@ _NOTE_:
- You can create self-managed nodes in an AWS Region where you have AWS Outposts, AWS Wavelength, or AWS Local Zones enabled
- You should not set both `create_launch_template` and `remote_access` to true, or you'll end up with new managed node groups that won't be able to join the cluster (see the sketch below).

Check out the usage docs for Managed Node groups [examples](https://aws-ia.github.io/terraform-aws-eks-blueprints/latest/node-groups/)
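To make the note above concrete, here is a hedged sketch of a node group definition that uses a launch template and deliberately omits `remote_access`; input names other than the two mentioned in the note are assumptions:

```hcl
# Illustrative sketch: when create_launch_template is true, do not also
# configure remote_access, or new nodes may fail to join the cluster.
managed_node_groups = {
  custom_lt = {
    node_group_name        = "managed-custom-lt"
    create_launch_template = true
    # remote_access intentionally omitted
    instance_types         = ["m5.large"]
    subnet_ids             = module.vpc.private_subnets
  }
}
```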

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements

2 changes: 0 additions & 2 deletions modules/aws-eks-self-managed-node-groups/README.md
@@ -6,8 +6,6 @@ Amazon EKS Self Managed Node Groups lets you create, update, scale, and terminat

This module allows you to create on-demand or spot self-managed Linux or Windows node groups. You can instantiate the module once with a map of node group values to create multiple self-managed node groups. By default, the module uses the latest available version of the Amazon-provided EKS-optimized AMIs for Amazon Linux 2, Bottlerocket, or Windows 2019 Server Core operating systems. You can override the image via the `custom_ami_id` input variable.

Check out the usage docs for Self-managed Node groups [examples](https://aws-ia.github.io/terraform-aws-eks-blueprints/latest/node-groups/)
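As a sketch of the override mentioned above; all input names besides `custom_ami_id` are assumptions:

```hcl
# Illustrative only: override the default EKS-optimized AMI for a
# self-managed node group (input names are assumptions).
self_managed_node_groups = {
  custom_ami = {
    node_group_name = "self-managed-custom-ami"
    custom_ami_id   = "ami-0123456789abcdef0" # placeholder AMI ID
    instance_type   = "m5.large"
  }
}
```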

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements

33 changes: 0 additions & 33 deletions test/README.md

This file was deleted.

