20 changes: 5 additions & 15 deletions README.md
@@ -47,9 +47,9 @@ You can find more about this project in Anton Babenko stream:
- [Why you should use this boilerplate](#why-you-should-use-this-boilerplate)
- [Description](#description)
- [Table of contents](#table-of-contents)
- [FAQ: Frequently Asked Questions](#faq-frequently-asked-questions)
- [Architecture diagram](#architecture-diagram)
- [Current infrastructure cost](#current-infrastructure-cost)
- [EKS Upgrading](#eks-upgrading)
- [Namespace structure in the K8S cluster](#namespace-structure-in-the-k8s-cluster)
- [Useful tools](#useful-tools)
- [Useful VSCode extensions](#useful-vscode-extensions)
@@ -77,6 +77,10 @@ You can find more about this project in Anton Babenko stream:
- [TFSEC](#tfsec)
- [Contributing](#contributing)

## FAQ: Frequently Asked Questions

See the [FAQ](docs/FAQ.md) for frequently asked questions.

## Architecture diagram

![aws-base-diagram](docs/aws-base-diagrams-Infrastracture-v6.svg)
@@ -124,20 +128,6 @@ This diagram describes the default infrastructure:
| | | | | Total | 216.8 |

> The cost is indicated without traffic charges for NAT Gateway, Load Balancer, and S3

## EKS Upgrading
To upgrade the k8s cluster to a new version, please follow the [official guide](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) and check the changelog for breaking changes.
Starting from v1.18, EKS supports K8S add-ons. We use them to update components such as vpc-cni, kube-proxy, and coredns. To get the latest add-on versions, run:
```bash
aws eks describe-addon-versions --kubernetes-version 1.21 --query 'addons[].[addonName, addonVersions[0].addonVersion]'
```
where `1.21` is the k8s version we are upgrading to.
**Don't forget** to update cluster-autoscaler too: its version must match the cluster version.
It is also **strongly recommended** to check that deployed objects use apiVersions that won't be removed after the upgrade. [*pluto*](https://github.com/FairwindsOps/pluto) can help with this:
```bash
# Switch to the correct cluster context first, e.g.:
kubectl config use-context <cluster-context>
# Detect deprecated apiVersions, where k8s=v1.22.0 is the version we want to upgrade to:
pluto detect-helm -o markdown --target-versions k8s=v1.22.0
```
## Namespace structure in the K8S cluster

![aws-base-namespaces](docs/aws-base-diagrams-Namespaces-v3.svg)
154 changes: 154 additions & 0 deletions docs/FAQ.md
@@ -0,0 +1,154 @@
## EKS Upgrading
To upgrade the k8s cluster to a new version, please follow the [official guide](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) and check the changelog for breaking changes.
Starting from v1.18, EKS supports K8S add-ons. We use them to update components such as vpc-cni, kube-proxy, and coredns. To get the latest add-on versions, run:
```bash
aws eks describe-addon-versions --kubernetes-version 1.21 --query 'addons[].[addonName, addonVersions[0].addonVersion]'
```
where `1.21` is the k8s version we are upgrading to.
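
Once you know the target versions, each add-on can be updated through the same API. A minimal sketch, assuming a cluster named `my-cluster` and an example vpc-cni version (substitute the values returned by the command above):

```bash
# Update the vpc-cni add-on; OVERWRITE resolves configuration conflicts
# in favor of the add-on's default settings.
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --addon-version v1.11.0-eksbuild.1 \
  --resolve-conflicts OVERWRITE
```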
**Don't forget** to update cluster-autoscaler too: its version must match the cluster version.
It is also **strongly recommended** to check that deployed objects use apiVersions that won't be removed after the upgrade. [*pluto*](https://github.com/FairwindsOps/pluto) can help with this:
```bash
# Switch to the correct cluster context first, e.g.:
kubectl config use-context <cluster-context>
# Detect deprecated apiVersions, where k8s=v1.22.0 is the version we want to upgrade to:
pluto detect-helm -o markdown --target-versions k8s=v1.22.0
```

## K8S namespace features

We strongly recommend using our terraform module `kubernetes-namespace` to manage (create) k8s namespaces. It provides additional functionality:

* **LimitRange**: By default, containers run with unbounded compute resources on a Kubernetes cluster. This module provides a [**LimitRange**](https://kubernetes.io/docs/concepts/policy/limit-range/) policy to constrain resource allocations (to Pods or Containers) in a namespace. The default value is:
```hcl
{
  type = "Container"
  default = {
    cpu    = "150m"
    memory = "128Mi"
  }
  default_request = {
    cpu    = "100m"
    memory = "64Mi"
  }
}
```
If you don't specify requests or limits for containers, these default values will be applied.
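
To verify that the defaults are in place, you can inspect the LimitRange object the module creates (a minimal sketch; the `test` namespace name is an example):

```bash
# Show the LimitRange in the namespace; the output lists the default
# limits and requests applied to containers that don't set their own.
kubectl describe limitrange -n test
```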

* **ResourceQuota**: When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources. Using this module, you can define a [**ResourceQuota**](https://kubernetes.io/docs/concepts/policy/resource-quotas/) to provide constraints that limit aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace. The default value is empty (no resource quotas).

* **NetworkPolicy**: If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. [**NetworkPolicies**](https://kubernetes.io/docs/concepts/services-networking/network-policies/) are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.

The entities that a Pod can communicate with are identified through a combination of the following three identifiers:

* Other pods that are allowed (exception: a pod cannot block access to itself)
* Namespaces that are allowed
* IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)

The default value is empty (no NetworkPolicies; all traffic is allowed).

Example of configuring namespace LimitRange, ResourceQuota and NetworkPolicy:
```hcl
module "test_namespace" {
  source = "../modules/kubernetes-namespace"
  name   = "test"
  limits = [
    {
      type = "Container"
      default = {
        cpu    = "200m"
        memory = "64Mi"
      }
      default_request = {
        cpu    = "100m"
        memory = "32Mi"
      }
      max = {
        cpu = "2"
      }
    },
    {
      type = "Pod"
      max = {
        cpu = "4"
      }
    }
  ]
  resource_quotas = [
    {
      name = "compute-resources"
      hard = {
        "requests.cpu"    = 1
        "requests.memory" = "1Gi"
        "limits.cpu"      = 2
        "limits.memory"   = "2Gi"
      }
      scope_selector = {
        scope_name = "PriorityClass"
        operator   = "NotIn"
        values     = ["high"]
      }
    },
    {
      name = "object-counts"
      hard = {
        configmaps               = 10
        persistentvolumeclaims   = 4
        pods                     = 4
        replicationcontrollers   = 20
        secrets                  = 10
        services                 = 10
        "services.loadbalancers" = 2
      }
    }
  ]
  network_policies = [
    {
      name         = "allow-this-namespace"
      policy_types = ["Ingress"]
      ingress = {
        from = [
          {
            namespace_selector = {
              match_labels = {
                name = "test"
              }
            }
          }
        ]
      }
    },
    {
      name         = "allow-from-ingress-namespace"
      policy_types = ["Ingress"]
      ingress = {
        from = [
          {
            namespace_selector = {
              match_labels = {
                name = "ing"
              }
            }
          }
        ]
      }
    },
    {
      name         = "allow-egress-to-dev"
      policy_types = ["Egress"]
      egress = {
        ports = [
          {
            port     = "80"
            protocol = "TCP"
          }
        ]
        to = [
          {
            namespace_selector = {
              match_labels = {
                name = "dev"
              }
            }
          }
        ]
      }
    }
  ]
}
```
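
The module exposes the namespace name as an output, which is how the Helm releases in this layer reference it (a minimal sketch; the release, chart, and repository names are hypothetical):

```hcl
module "app_namespace" {
  source = "../modules/kubernetes-namespace"
  name   = "app"
}

resource "helm_release" "app" {
  name       = "app"                         # hypothetical release
  chart      = "app-chart"                   # hypothetical chart
  repository = "https://charts.example.com"  # hypothetical repo
  namespace  = module.app_namespace.name     # implicit dependency: the namespace is created first
}
```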

2 changes: 1 addition & 1 deletion terraform/layer2-k8s/eks-aws-node-termination-handler.tf
@@ -3,7 +3,7 @@ resource "helm_release" "aws_node_termination_handler" {
chart = "aws-node-termination-handler"
version = var.aws_node_termination_handler_version
repository = local.helm_repo_eks
namespace = kubernetes_namespace.sys.id
namespace = module.sys_namespace.name
wait = false
max_history = var.helm_release_history_size

@@ -15,12 +15,3 @@ resource "helm_release" "calico_daemonset" {
data.template_file.calico_daemonset.rendered,
]
}

#tfsec:ignore:kubernetes-network-no-public-egress tfsec:ignore:kubernetes-network-no-public-ingress
module "dev_ns_network_policy" {
source = "../modules/kubernetes-network-policy-namespace"
namespace = kubernetes_namespace.dev.metadata[0].name
allow_from_namespaces = [module.ing_namespace.labels_name]

depends = [helm_release.calico_daemonset]
}
9 changes: 4 additions & 5 deletions terraform/layer2-k8s/eks-cert-manager.tf
@@ -10,7 +10,7 @@ resource "helm_release" "cert_manager" {
name = "cert-manager"
chart = "cert-manager"
repository = local.helm_repo_certmanager
namespace = kubernetes_namespace.certmanager.id
namespace = module.certmanager_namespace.name
version = var.cert_manager_version
wait = true
max_history = var.helm_release_history_size
@@ -20,10 +20,9 @@
]
}

resource "kubernetes_namespace" "certmanager" {
metadata {
name = "certmanager"
}
module "certmanager_namespace" {
source = "../modules/kubernetes-namespace"
name = "certmanager"
}

#tfsec:ignore:aws-iam-no-policy-wildcards
2 changes: 1 addition & 1 deletion terraform/layer2-k8s/eks-cluster-autoscaler.tf
@@ -14,7 +14,7 @@ resource "helm_release" "cluster_autoscaler" {
chart = "cluster-autoscaler"
repository = local.helm_repo_cluster_autoscaler
version = var.cluster_autoscaler_chart_version
namespace = kubernetes_namespace.sys.id
namespace = module.sys_namespace.name
max_history = var.helm_release_history_size

values = [
4 changes: 2 additions & 2 deletions terraform/layer2-k8s/eks-cluster-issuer.tf
@@ -11,7 +11,7 @@ data "template_file" "cluster_issuer" {
resource "helm_release" "cluster_issuer" {
name = "cluster-issuer"
chart = "../../helm-charts/cluster-issuer"
namespace = kubernetes_namespace.certmanager.id
namespace = module.certmanager_namespace.name
wait = false
max_history = var.helm_release_history_size

@@ -20,5 +20,5 @@
]

# This dependency is needed for a correct apply
depends_on = [helm_release.cert_manager, kubernetes_namespace.certmanager]
depends_on = [helm_release.cert_manager]
}
2 changes: 1 addition & 1 deletion terraform/layer2-k8s/eks-external-dns.tf
@@ -14,7 +14,7 @@ resource "helm_release" "external_dns" {
chart = "external-dns"
repository = local.helm_repo_bitnami
version = var.external_dns_version
namespace = kubernetes_namespace.dns.id
namespace = module.dns_namespace.name
max_history = var.helm_release_history_size

values = [
4 changes: 2 additions & 2 deletions terraform/layer2-k8s/eks-external-secrets.tf
@@ -12,7 +12,7 @@ resource "helm_release" "external_secrets" {
chart = "kubernetes-external-secrets"
repository = local.helm_repo_external_secrets
version = var.external_secrets_version
namespace = kubernetes_namespace.sys.id
namespace = module.sys_namespace.name
max_history = var.helm_release_history_size

values = [
@@ -25,7 +25,7 @@ resource "helm_release" "reloader" {
chart = "reloader"
repository = local.helm_repo_stakater
version = var.reloader_version
namespace = kubernetes_namespace.sys.id
namespace = module.sys_namespace.name
wait = false
max_history = var.helm_release_history_size
}
2 changes: 1 addition & 1 deletion terraform/layer2-k8s/eks-kube-prometheus-stack.tf
@@ -30,7 +30,7 @@ resource "helm_release" "prometheus_operator" {
name = "kube-prometheus-stack"
chart = "kube-prometheus-stack"
repository = local.helm_repo_prometheus_community
namespace = kubernetes_namespace.monitoring.id
namespace = module.monitoring_namespace.name
version = var.prometheus_operator_version
wait = false
max_history = var.helm_release_history_size
2 changes: 1 addition & 1 deletion terraform/layer2-k8s/eks-loki-stack.tf
@@ -15,7 +15,7 @@ resource "helm_release" "loki_stack" {
name = "loki-stack"
chart = "loki-stack"
repository = local.helm_repo_grafana
namespace = kubernetes_namespace.monitoring.id
namespace = module.monitoring_namespace.name
version = var.loki_stack
wait = false
max_history = var.helm_release_history_size
61 changes: 18 additions & 43 deletions terraform/layer2-k8s/eks-namespaces.tf
@@ -1,59 +1,34 @@
resource "kubernetes_namespace" "dns" {
metadata {
name = "dns"
}
module "dns_namespace" {
source = "../modules/kubernetes-namespace"
name = "dns"
}

module "ing_namespace" {
source = "../modules/kubernetes-namespace"
name = "ing"
}

resource "kubernetes_namespace" "elk" {
metadata {
name = "elk"
}
}

resource "kubernetes_namespace" "prod" {
metadata {
name = "prod"
}
}

resource "kubernetes_namespace" "staging" {
metadata {
name = "staging"
}
}

resource "kubernetes_namespace" "dev" {
metadata {
name = "dev"
}
module "elk_namespace" {
source = "../modules/kubernetes-namespace"
name = "elk"
}

resource "kubernetes_namespace" "fargate" {
metadata {
name = "fargate"
}
module "fargate_namespace" {
source = "../modules/kubernetes-namespace"
name = "fargate"
}

resource "kubernetes_namespace" "ci" {
metadata {
name = "ci"
}
module "ci_namespace" {
source = "../modules/kubernetes-namespace"
name = "ci"
}

resource "kubernetes_namespace" "sys" {
metadata {
name = "sys"
}
module "sys_namespace" {
source = "../modules/kubernetes-namespace"
name = "sys"
}

resource "kubernetes_namespace" "monitoring" {
metadata {
name = "monitoring"
}
module "monitoring_namespace" {
source = "../modules/kubernetes-namespace"
name = "monitoring"
}
