diff --git a/README.md b/README.md index d474f107..868d56ec 100644 --- a/README.md +++ b/README.md @@ -47,9 +47,9 @@ You can find more about this project in Anton Babenko stream: - [Why you should use this boilerplate](#why-you-should-use-this-boilerplate) - [Description](#description) - [Table of contents](#table-of-contents) + - [FAQ: Frequently Asked Questions](#faq-frequently-asked-questions) - [Architecture diagram](#architecture-diagram) - [Current infrastructure cost](#current-infrastructure-cost) - - [EKS Upgrading](#eks-upgrading) - [Namespace structure in the K8S cluster](#namespace-structure-in-the-k8s-cluster) - [Useful tools](#useful-tools) - [Useful VSCode extensions](#useful-vscode-extensions) @@ -77,6 +77,10 @@ You can find more about this project in Anton Babenko stream: - [TFSEC](#tfsec) - [Contributing](#contributing) +## FAQ: Frequently Asked Questions + +[FAQ](docs/FAQ.md): Frequently Asked Questions + ## Architecture diagram ![aws-base-diagram](docs/aws-base-diagrams-Infrastracture-v6.svg) @@ -124,20 +128,6 @@ This diagram describes the default infrastructure: | | | | | Total | 216.8 | > The cost is indicated without counting the amount of traffic for Nat Gateway Load Balancer and S3 - -## EKS Upgrading -To upgrade k8s cluster to a new version, please use [official guide](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) and check changelog/breaking changes. -Starting from v1.18 EKS supports K8S add-ons. We use them to update things like vpc-cni, kube-proxy, coredns. To get the latest add-ons versions, run: -```bash -aws eks describe-addon-versions --kubernetes-version 1.21 --query 'addons[].[addonName, addonVersions[0].addonVersion]' -``` -where 1.21 - is a k8s version on which we are updating. -DO NOT FORGET!!! to update cluster-autoscaler too. It's version must be the same as the cluster version. 
-Also ***IT'S VERY RECOMMENDED*** to check that deployed objects have actual apiVersions that won't be deleted after upgrading. There is a tool [*pluto*](https://github.com/FairwindsOps/pluto) that can help to do it.
-```bash
-Switch to the correct cluster
-Run `pluto detect-helm -o markdown --target-versions k8s=v1.22.0`, where `k8s=v1.22.0` is a k8s version we want to update to.
-```
 
 ## Namespace structure in the K8S cluster
 ![aws-base-namespaces](docs/aws-base-diagrams-Namespaces-v3.svg)
diff --git a/docs/FAQ.md b/docs/FAQ.md
new file mode 100644
index 00000000..cc35ed60
--- /dev/null
+++ b/docs/FAQ.md
@@ -0,0 +1,154 @@
+## EKS Upgrading
+To upgrade the k8s cluster to a new version, follow the [official guide](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) and check the changelog for breaking changes.
+Starting from v1.18, EKS supports K8S add-ons. We use them to update components such as vpc-cni, kube-proxy, and coredns. To get the latest add-on versions, run:
+```bash
+aws eks describe-addon-versions --kubernetes-version 1.21 --query 'addons[].[addonName, addonVersions[0].addonVersion]'
+```
+where `1.21` is the k8s version you are upgrading to.
+Do not forget to update cluster-autoscaler as well: its version must match the cluster version.
+We also ***strongly recommend*** checking that deployed objects use apiVersions that will still exist after the upgrade. The [*pluto*](https://github.com/FairwindsOps/pluto) tool can help with this. Switch to the correct cluster context, then run:
+```bash
+pluto detect-helm -o markdown --target-versions k8s=v1.22.0
+```
+where `k8s=v1.22.0` is the k8s version you want to upgrade to.
+
+## K8S namespace features
+We strongly recommend using our terraform module `kubernetes-namespace` to manage (create) k8s namespaces. It provides the following additional functionality.
+* **LimitRange**: By default, containers run with unbounded compute resources on a Kubernetes cluster. This module includes a [**LimitRange**](https://kubernetes.io/docs/concepts/policy/limit-range/) policy to constrain resource allocations (to Pods or Containers) in a namespace. The default value is:
+```
+  {
+    type = "Container"
+    default = {
+      cpu    = "150m"
+      memory = "128Mi"
+    }
+    default_request = {
+      cpu    = "100m"
+      memory = "64Mi"
+    }
+  }
+```
+If you don't specify requests or limits for containers, these default values are applied.
+
+* **ResourceQuota**: When several users or teams share a cluster with a fixed number of nodes, there is a concern that one team could use more than its fair share of resources. Using this module you can define a [**ResourceQuota**](https://kubernetes.io/docs/concepts/policy/resource-quotas/) to constrain aggregate resource consumption per namespace. It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that namespace. The default value is empty (no resource quotas).
+
+* **NetworkPolicy**: If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), consider using Kubernetes [**NetworkPolicies**](https://kubernetes.io/docs/concepts/services-networking/network-policies/) for particular applications in your cluster. NetworkPolicies are an application-centric construct that lets you specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.
+
+The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
+
+1. Other pods that are allowed (exception: a pod cannot block access to itself)
+2. Namespaces that are allowed
+3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
+
+The default value is empty (no NetworkPolicies; all traffic is allowed).
+
+Example of configuring namespace LimitRange, ResourceQuota and NetworkPolicy:
+```
+module "test_namespace" {
+  source = "../modules/kubernetes-namespace"
+  name   = "test"
+  limits = [
+    {
+      type = "Container"
+      default = {
+        cpu    = "200m"
+        memory = "64Mi"
+      }
+      default_request = {
+        cpu    = "100m"
+        memory = "32Mi"
+      }
+      max = {
+        cpu = "2"
+      }
+    },
+    {
+      type = "Pod"
+      max = {
+        cpu = "4"
+      }
+    }
+  ]
+  resource_quotas = [
+    {
+      name = "compute-resources"
+      hard = {
+        "requests.cpu"    = 1
+        "requests.memory" = "1Gi"
+        "limits.cpu"      = 2
+        "limits.memory"   = "2Gi"
+      }
+      scope_selector = {
+        scope_name = "PriorityClass"
+        operator   = "NotIn"
+        values     = ["high"]
+      }
+    },
+    {
+      name = "object-counts"
+      hard = {
+        configmaps               = 10
+        persistentvolumeclaims   = 4
+        pods                     = 4
+        replicationcontrollers   = 20
+        secrets                  = 10
+        services                 = 10
+        "services.loadbalancers" = 2
+      }
+    }
+  ]
+  network_policies = [
+    {
+      name         = "allow-this-namespace"
+      policy_types = ["Ingress"]
+      ingress = {
+        from = [
+          {
+            namespace_selector = {
+              match_labels = {
+                name = "test"
+              }
+            }
+          }
+        ]
+      }
+    },
+    {
+      name         = "allow-from-ingress-namespace"
+      policy_types = ["Ingress"]
+      ingress = {
+        from = [
+          {
+            namespace_selector = {
+              match_labels = {
+                name = "ing"
+              }
+            }
+          }
+        ]
+      }
+    },
+    {
+      name         = "allow-egress-to-dev"
+      policy_types = ["Egress"]
+      egress = {
+        ports = [
+          {
+            port     = "80"
+            protocol = "TCP"
+          }
+        ]
+        to = [
+          {
+            namespace_selector = {
+              match_labels = {
+                name = "dev"
+              }
+            }
+          }
+        ]
+      }
+    }
+  ]
+}
+```
+
diff --git
a/terraform/layer2-k8s/eks-aws-node-termination-handler.tf b/terraform/layer2-k8s/eks-aws-node-termination-handler.tf index 4c0ef612..cefabb73 100644 --- a/terraform/layer2-k8s/eks-aws-node-termination-handler.tf +++ b/terraform/layer2-k8s/eks-aws-node-termination-handler.tf @@ -3,7 +3,7 @@ resource "helm_release" "aws_node_termination_handler" { chart = "aws-node-termination-handler" version = var.aws_node_termination_handler_version repository = local.helm_repo_eks - namespace = kubernetes_namespace.sys.id + namespace = module.sys_namespace.name wait = false max_history = var.helm_release_history_size diff --git a/terraform/layer2-k8s/eks-network-policy.tf b/terraform/layer2-k8s/eks-calico.tf similarity index 54% rename from terraform/layer2-k8s/eks-network-policy.tf rename to terraform/layer2-k8s/eks-calico.tf index 0b305778..1396db8f 100644 --- a/terraform/layer2-k8s/eks-network-policy.tf +++ b/terraform/layer2-k8s/eks-calico.tf @@ -15,12 +15,3 @@ resource "helm_release" "calico_daemonset" { data.template_file.calico_daemonset.rendered, ] } - -#tfsec:ignore:kubernetes-network-no-public-egress tfsec:ignore:kubernetes-network-no-public-ingress -module "dev_ns_network_policy" { - source = "../modules/kubernetes-network-policy-namespace" - namespace = kubernetes_namespace.dev.metadata[0].name - allow_from_namespaces = [module.ing_namespace.labels_name] - - depends = [helm_release.calico_daemonset] -} diff --git a/terraform/layer2-k8s/eks-cert-manager.tf b/terraform/layer2-k8s/eks-cert-manager.tf index 198473c3..6410b30f 100644 --- a/terraform/layer2-k8s/eks-cert-manager.tf +++ b/terraform/layer2-k8s/eks-cert-manager.tf @@ -10,7 +10,7 @@ resource "helm_release" "cert_manager" { name = "cert-manager" chart = "cert-manager" repository = local.helm_repo_certmanager - namespace = kubernetes_namespace.certmanager.id + namespace = module.certmanager_namespace.name version = var.cert_manager_version wait = true max_history = var.helm_release_history_size @@ -20,10 +20,9 @@ 
resource "helm_release" "cert_manager" { ] } -resource "kubernetes_namespace" "certmanager" { - metadata { - name = "certmanager" - } +module "certmanager_namespace" { + source = "../modules/kubernetes-namespace" + name = "certmanager" } #tfsec:ignore:aws-iam-no-policy-wildcards diff --git a/terraform/layer2-k8s/eks-cluster-autoscaler.tf b/terraform/layer2-k8s/eks-cluster-autoscaler.tf index 7594290b..cae06566 100644 --- a/terraform/layer2-k8s/eks-cluster-autoscaler.tf +++ b/terraform/layer2-k8s/eks-cluster-autoscaler.tf @@ -14,7 +14,7 @@ resource "helm_release" "cluster_autoscaler" { chart = "cluster-autoscaler" repository = local.helm_repo_cluster_autoscaler version = var.cluster_autoscaler_chart_version - namespace = kubernetes_namespace.sys.id + namespace = module.sys_namespace.name max_history = var.helm_release_history_size values = [ diff --git a/terraform/layer2-k8s/eks-cluster-issuer.tf b/terraform/layer2-k8s/eks-cluster-issuer.tf index 07e41c60..3a504197 100644 --- a/terraform/layer2-k8s/eks-cluster-issuer.tf +++ b/terraform/layer2-k8s/eks-cluster-issuer.tf @@ -11,7 +11,7 @@ data "template_file" "cluster_issuer" { resource "helm_release" "cluster_issuer" { name = "cluster-issuer" chart = "../../helm-charts/cluster-issuer" - namespace = kubernetes_namespace.certmanager.id + namespace = module.certmanager_namespace.name wait = false max_history = var.helm_release_history_size @@ -20,5 +20,5 @@ resource "helm_release" "cluster_issuer" { ] # This dep needs for correct apply - depends_on = [helm_release.cert_manager, kubernetes_namespace.certmanager] + depends_on = [helm_release.cert_manager] } diff --git a/terraform/layer2-k8s/eks-external-dns.tf b/terraform/layer2-k8s/eks-external-dns.tf index f7a4f436..29d54715 100644 --- a/terraform/layer2-k8s/eks-external-dns.tf +++ b/terraform/layer2-k8s/eks-external-dns.tf @@ -14,7 +14,7 @@ resource "helm_release" "external_dns" { chart = "external-dns" repository = local.helm_repo_bitnami version = 
var.external_dns_version - namespace = kubernetes_namespace.dns.id + namespace = module.dns_namespace.name max_history = var.helm_release_history_size values = [ diff --git a/terraform/layer2-k8s/eks-external-secrets.tf b/terraform/layer2-k8s/eks-external-secrets.tf index f6f479dd..7ed6626d 100644 --- a/terraform/layer2-k8s/eks-external-secrets.tf +++ b/terraform/layer2-k8s/eks-external-secrets.tf @@ -12,7 +12,7 @@ resource "helm_release" "external_secrets" { chart = "kubernetes-external-secrets" repository = local.helm_repo_external_secrets version = var.external_secrets_version - namespace = kubernetes_namespace.sys.id + namespace = module.sys_namespace.name max_history = var.helm_release_history_size values = [ @@ -25,7 +25,7 @@ resource "helm_release" "reloader" { chart = "reloader" repository = local.helm_repo_stakater version = var.reloader_version - namespace = kubernetes_namespace.sys.id + namespace = module.sys_namespace.name wait = false max_history = var.helm_release_history_size } diff --git a/terraform/layer2-k8s/eks-kube-prometheus-stack.tf b/terraform/layer2-k8s/eks-kube-prometheus-stack.tf index 0fa613ce..7bdc1a67 100644 --- a/terraform/layer2-k8s/eks-kube-prometheus-stack.tf +++ b/terraform/layer2-k8s/eks-kube-prometheus-stack.tf @@ -30,7 +30,7 @@ resource "helm_release" "prometheus_operator" { name = "kube-prometheus-stack" chart = "kube-prometheus-stack" repository = local.helm_repo_prometheus_community - namespace = kubernetes_namespace.monitoring.id + namespace = module.monitoring_namespace.name version = var.prometheus_operator_version wait = false max_history = var.helm_release_history_size diff --git a/terraform/layer2-k8s/eks-loki-stack.tf b/terraform/layer2-k8s/eks-loki-stack.tf index e1a594ad..935b34ca 100644 --- a/terraform/layer2-k8s/eks-loki-stack.tf +++ b/terraform/layer2-k8s/eks-loki-stack.tf @@ -15,7 +15,7 @@ resource "helm_release" "loki_stack" { name = "loki-stack" chart = "loki-stack" repository = local.helm_repo_grafana - 
namespace = kubernetes_namespace.monitoring.id + namespace = module.monitoring_namespace.name version = var.loki_stack wait = false max_history = var.helm_release_history_size diff --git a/terraform/layer2-k8s/eks-namespaces.tf b/terraform/layer2-k8s/eks-namespaces.tf index 18309693..0eb8fd2d 100644 --- a/terraform/layer2-k8s/eks-namespaces.tf +++ b/terraform/layer2-k8s/eks-namespaces.tf @@ -1,7 +1,6 @@ -resource "kubernetes_namespace" "dns" { - metadata { - name = "dns" - } +module "dns_namespace" { + source = "../modules/kubernetes-namespace" + name = "dns" } module "ing_namespace" { @@ -9,51 +8,27 @@ module "ing_namespace" { name = "ing" } -resource "kubernetes_namespace" "elk" { - metadata { - name = "elk" - } -} - -resource "kubernetes_namespace" "prod" { - metadata { - name = "prod" - } -} - -resource "kubernetes_namespace" "staging" { - metadata { - name = "staging" - } -} - -resource "kubernetes_namespace" "dev" { - metadata { - name = "dev" - } +module "elk_namespace" { + source = "../modules/kubernetes-namespace" + name = "elk" } -resource "kubernetes_namespace" "fargate" { - metadata { - name = "fargate" - } +module "fargate_namespace" { + source = "../modules/kubernetes-namespace" + name = "fargate" } -resource "kubernetes_namespace" "ci" { - metadata { - name = "ci" - } +module "ci_namespace" { + source = "../modules/kubernetes-namespace" + name = "ci" } -resource "kubernetes_namespace" "sys" { - metadata { - name = "sys" - } +module "sys_namespace" { + source = "../modules/kubernetes-namespace" + name = "sys" } -resource "kubernetes_namespace" "monitoring" { - metadata { - name = "monitoring" - } +module "monitoring_namespace" { + source = "../modules/kubernetes-namespace" + name = "monitoring" } - diff --git a/terraform/layer2-k8s/examples/eks-elk.tf b/terraform/layer2-k8s/examples/eks-elk.tf index 6013a224..10b72a4d 100644 --- a/terraform/layer2-k8s/examples/eks-elk.tf +++ b/terraform/layer2-k8s/examples/eks-elk.tf @@ -23,7 +23,7 @@ data 
"template_file" "elk" { resource "helm_release" "elk" { name = "elk" chart = "../../helm-charts/elk" - namespace = kubernetes_namespace.elk.id + namespace = module.elk_namespace.name wait = false max_history = var.helm_release_history_size @@ -53,7 +53,7 @@ module "elastic_tls" { name = local.name common_name = "elasticsearch-master" - dns_names = [local.domain_name, "*.${local.domain_name}", "elasticsearch-master", "elasticsearch-master.${kubernetes_namespace.elk.id}", "kibana", "kibana.${kubernetes_namespace.elk.id}", "kibana-kibana", "kibana-kibana.${kubernetes_namespace.elk.id}", "logstash", "logstash.${kubernetes_namespace.elk.id}"] + dns_names = [local.domain_name, "*.${local.domain_name}", "elasticsearch-master", "elasticsearch-master.${module.elk_namespace.name}", "kibana", "kibana.${module.elk_namespace.name}", "kibana-kibana", "kibana-kibana.${module.elk_namespace.name}", "logstash", "logstash.${module.elk_namespace.name}"] validity_period_hours = 8760 early_renewal_hours = 336 } @@ -61,7 +61,7 @@ module "elastic_tls" { resource "kubernetes_secret" "elasticsearch_credentials" { metadata { name = "elastic-credentials" - namespace = kubernetes_namespace.elk.id + namespace = module.elk_namespace.name } data = { @@ -73,7 +73,7 @@ resource "kubernetes_secret" "elasticsearch_credentials" { resource "kubernetes_secret" "elasticsearch_certificates" { metadata { name = "elastic-certificates" - namespace = kubernetes_namespace.elk.id + namespace = module.elk_namespace.name } data = { @@ -86,7 +86,7 @@ resource "kubernetes_secret" "elasticsearch_certificates" { resource "kubernetes_secret" "elasticsearch_s3_user_creds" { metadata { name = "elasticsearch-s3-user-creds" - namespace = kubernetes_namespace.elk.id + namespace = module.elk_namespace.name } data = { @@ -104,7 +104,7 @@ resource "random_string" "elasticsearch_password" { resource "kubernetes_secret" "kibana_enc_key" { metadata { name = "kibana-encryption-key" - namespace = kubernetes_namespace.elk.id + 
namespace = module.elk_namespace.name } data = { diff --git a/terraform/layer2-k8s/examples/eks-gitlab-runner.tf b/terraform/layer2-k8s/examples/eks-gitlab-runner.tf index 763ba669..2630e690 100644 --- a/terraform/layer2-k8s/examples/eks-gitlab-runner.tf +++ b/terraform/layer2-k8s/examples/eks-gitlab-runner.tf @@ -4,7 +4,7 @@ locals { gitlab_runner_template = templatefile("${path.module}/templates/gitlab-runner-values.tmpl", { registration_token = local.gitlab_registration_token - namespace = kubernetes_namespace.ci.id + namespace = module.ci_namespace.name role_arn = module.aws_iam_gitlab_runner.role_arn runner_sa = module.eks_rbac_gitlab_runner.sa_name bucket_name = local.gitlab_runner_cache_bucket_name @@ -18,7 +18,7 @@ module "eks_rbac_gitlab_runner" { name = "${local.name}-gl" role_arn = module.aws_iam_gitlab_runner.role_arn - namespace = kubernetes_namespace.ci.id + namespace = module.ci_namespace.name } resource "helm_release" "gitlab_runner" { @@ -26,7 +26,7 @@ resource "helm_release" "gitlab_runner" { chart = "gitlab-runner" repository = local.helm_repo_gitlab version = var.gitlab_runner_version - namespace = kubernetes_namespace.ci.id + namespace = module.ci_namespace.name wait = false max_history = var.helm_release_history_size diff --git a/terraform/layer2-k8s/examples/eks-teamcity.tf b/terraform/layer2-k8s/examples/eks-teamcity.tf index 9a66d4e5..04001556 100644 --- a/terraform/layer2-k8s/examples/eks-teamcity.tf +++ b/terraform/layer2-k8s/examples/eks-teamcity.tf @@ -7,7 +7,7 @@ module "eks_rbac_teamcity" { name = "${local.name}-teamcity" role_arn = module.aws_iam_teamcity.role_arn - namespace = kubernetes_namespace.ci.id + namespace = module.ci_namespace.name } data "template_file" "teamcity_agent" { @@ -31,7 +31,7 @@ data "template_file" "teamcity" { resource "helm_release" "teamcity" { name = "teamcity" chart = "../../helm-charts/teamcity" - namespace = kubernetes_namespace.ci.id + namespace = module.ci_namespace.name wait = false cleanup_on_fail = 
true max_history = var.helm_release_history_size diff --git a/terraform/layer2-k8s/templates/cluster-autoscaler-values.yaml b/terraform/layer2-k8s/templates/cluster-autoscaler-values.yaml index 40ba2c48..aaa95146 100644 --- a/terraform/layer2-k8s/templates/cluster-autoscaler-values.yaml +++ b/terraform/layer2-k8s/templates/cluster-autoscaler-values.yaml @@ -36,3 +36,11 @@ affinity: operator: In values: - ON_DEMAND + +resources: + limits: + cpu: 100m + memory: 512Mi + requests: + cpu: 100m + memory: 320Mi diff --git a/terraform/layer2-k8s/templates/prometheus-values.yaml b/terraform/layer2-k8s/templates/prometheus-values.yaml index 51800b9e..d84df0ee 100644 --- a/terraform/layer2-k8s/templates/prometheus-values.yaml +++ b/terraform/layer2-k8s/templates/prometheus-values.yaml @@ -21,6 +21,13 @@ prometheus: resources: requests: storage: 30Gi + resources: + requests: + cpu: 200m + memory: 1024Mi + limits: + cpu: 400m + memory: 1024Mi affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: diff --git a/terraform/modules/kubernetes-namespace/limitrange.tf b/terraform/modules/kubernetes-namespace/limitrange.tf new file mode 100644 index 00000000..4ff983a8 --- /dev/null +++ b/terraform/modules/kubernetes-namespace/limitrange.tf @@ -0,0 +1,21 @@ +resource "kubernetes_limit_range" "this" { + count = var.enable ? 
1 : 0 + + metadata { + name = var.name + namespace = kubernetes_namespace.this[count.index].id + } + spec { + dynamic "limit" { + for_each = var.limits + content { + type = lookup(limit.value, "type", null) + default = lookup(limit.value, "default", null) + default_request = lookup(limit.value, "default_request", null) + max = lookup(limit.value, "max", null) + max_limit_request_ratio = lookup(limit.value, "max_limit_request_ratio", null) + min = lookup(limit.value, "min", null) + } + } + } +} diff --git a/terraform/modules/kubernetes-namespace/main.tf b/terraform/modules/kubernetes-namespace/main.tf index f9643066..2a70ab24 100644 --- a/terraform/modules/kubernetes-namespace/main.tf +++ b/terraform/modules/kubernetes-namespace/main.tf @@ -6,20 +6,11 @@ locals { }, var.labels) } -# this resource is used to provide linkage output.labels -# with the kubernetes_namespace resource -resource "null_resource" "labels" { - triggers = { - name = var.name - } - - depends_on = [kubernetes_namespace.this] -} - resource "kubernetes_namespace" "this" { # option to disable namespace creation # e.g. if you want to create namespace only in specific environment count = var.enable ? 1 : 0 + metadata { annotations = var.annotations labels = local.labels diff --git a/terraform/modules/kubernetes-namespace/network-policy.tf b/terraform/modules/kubernetes-namespace/network-policy.tf new file mode 100644 index 00000000..d34ccae0 --- /dev/null +++ b/terraform/modules/kubernetes-namespace/network-policy.tf @@ -0,0 +1,142 @@ +resource "kubernetes_network_policy" "this" { + count = var.enable && length(var.network_policies) > 0 ? length(var.network_policies) : 0 + + metadata { + name = var.network_policies[count.index].name + namespace = kubernetes_namespace.this[0].id + } + spec { + pod_selector { + dynamic "match_expressions" { + for_each = lookup(var.network_policies[count.index], "pod_selector", null) != null ? 
+lookup(var.network_policies[count.index].pod_selector, "match_expressions", []) : []
+        content {
+          key      = lookup(match_expressions.value, "key", null)
+          operator = lookup(match_expressions.value, "operator", null)
+          values   = lookup(match_expressions.value, "values", null)
+        }
+      }
+      match_labels = lookup(var.network_policies[count.index], "pod_selector", null) != null ? lookup(var.network_policies[count.index].pod_selector, "match_labels", null) : null
+    }
+
+    dynamic "ingress" {
+      for_each = lookup(var.network_policies[count.index], "ingress", null) != null ? [var.network_policies[count.index].ingress] : []
+      content {
+        dynamic "from" {
+          for_each = lookup(ingress.value, "from", null) != null ? ingress.value.from : []
+          content {
+
+            dynamic "namespace_selector" {
+              for_each = lookup(from.value, "namespace_selector", null) != null ? [from.value.namespace_selector] : []
+              content {
+                match_labels = lookup(namespace_selector.value, "match_labels", null)
+                dynamic "match_expressions" {
+                  for_each = lookup(namespace_selector.value, "match_expressions", null) != null ? [namespace_selector.value.match_expressions] : []
+                  content {
+                    key      = lookup(match_expressions.value, "key", null)
+                    operator = lookup(match_expressions.value, "operator", null)
+                    values   = lookup(match_expressions.value, "values", null)
+                  }
+                }
+              }
+            }
+
+            dynamic "pod_selector" {
+              for_each = lookup(from.value, "pod_selector", null) != null ? [from.value.pod_selector] : []
+              content {
+                match_labels = lookup(pod_selector.value, "match_labels", null)
+                dynamic "match_expressions" {
+                  for_each = lookup(pod_selector.value, "match_expressions", null) != null ? [pod_selector.value.match_expressions] : []
+                  content {
+                    key      = lookup(match_expressions.value, "key", null)
+                    operator = lookup(match_expressions.value, "operator", null)
+                    values   = lookup(match_expressions.value, "values", null)
+                  }
+                }
+              }
+            }
+
+            dynamic "ip_block" {
+              for_each = lookup(from.value, "ip_block", null) != null ?
[from.value.ip_block] : [] + content { + cidr = lookup(ip_block.value, "cidr", null) + except = lookup(ip_block.value, "except", null) + } + } + + } + } + + dynamic "ports" { + for_each = lookup(ingress.value, "ports", null) != null ? ingress.value.ports : [] + content { + port = ports.value.port + protocol = ports.value.protocol + } + } + + } + } + + dynamic "egress" { + for_each = lookup(var.network_policies[count.index], "egress", null) != null ? [var.network_policies[count.index].egress] : [] + content { + dynamic "to" { + for_each = lookup(egress.value, "to", null) != null ? egress.value.to : [] + content { + + dynamic "namespace_selector" { + for_each = lookup(to.value, "namespace_selector", null) != null ? [to.value.namespace_selector] : [] + content { + match_labels = lookup(namespace_selector.value, "match_labels", null) + dynamic "match_expressions" { + for_each = lookup(namespace_selector.value, "match_expressions", null) != null ? [namespace_selector.value.match_expressions] : [] + content { + key = lookup(match_expressions.value, "key", null) + operator = lookup(match_expressions.value, "operator", null) + values = lookup(match_expressions.value, "values", null) + } + } + } + } + + dynamic "pod_selector" { + for_each = lookup(to.value, "pod_selector", null) != null ? [to.value.pod_selector] : [] + content { + match_labels = lookup(pod_selector.value, "match_labels", null) + dynamic "match_expressions" { + for_each = lookup(pod_selector.value, "match_expressions", null) != null ? [pod_selector.value.match_expressions] : [] + content { + key = lookup(match_expressions.value, "key", null) + operator = lookup(match_expressions.value, "operator", null) + values = lookup(match_expressions.value, "values", null) + } + } + } + } + + dynamic "ip_block" { + for_each = lookup(to.value, "ip_block", null) != null ? 
[to.value.ip_block] : [] + content { + cidr = lookup(ip_block.value, "cidr", null) + except = lookup(ip_block.value, "except", null) + } + } + + } + } + + dynamic "ports" { + for_each = lookup(egress.value, "ports", null) != null ? egress.value.ports : [] + content { + port = ports.value.port + protocol = ports.value.protocol + } + } + + } + } + + policy_types = lookup(var.network_policies[count.index], "policy_types", ["Ingress", "Egress"]) + } + +} diff --git a/terraform/modules/kubernetes-namespace/output.tf b/terraform/modules/kubernetes-namespace/output.tf index 6307b881..6366754b 100644 --- a/terraform/modules/kubernetes-namespace/output.tf +++ b/terraform/modules/kubernetes-namespace/output.tf @@ -1,9 +1,9 @@ output "name" { value = kubernetes_namespace.this[0].metadata[0].name - description = "The URL of the created resource" + description = "The name of the created namespace (from object metadata)" } output "labels_name" { - value = null_resource.labels.triggers.name - description = "Map of the labels" + value = kubernetes_namespace.this[0].metadata[0].labels.name + description = "The value of the name label" } diff --git a/terraform/modules/kubernetes-namespace/resourcequota.tf b/terraform/modules/kubernetes-namespace/resourcequota.tf new file mode 100644 index 00000000..fb8148dc --- /dev/null +++ b/terraform/modules/kubernetes-namespace/resourcequota.tf @@ -0,0 +1,23 @@ + +resource "kubernetes_resource_quota" "this" { + count = var.enable && length(var.resource_quotas) > 0 ? length(var.resource_quotas) : 0 + + metadata { + name = var.resource_quotas[count.index].name + namespace = kubernetes_namespace.this[0].id + } + spec { + hard = var.resource_quotas[count.index].hard + scopes = lookup(var.resource_quotas[count.index], "scopes", null) + dynamic "scope_selector" { + for_each = lookup(var.resource_quotas[count.index], "scope_selector", null) != null ? 
[var.resource_quotas[count.index].scope_selector] : [] + content { + match_expression { + scope_name = lookup(scope_selector.value, "scope_name", null) + operator = lookup(scope_selector.value, "operator", null) + values = lookup(scope_selector.value, "values", null) + } + } + } + } +} diff --git a/terraform/modules/kubernetes-namespace/variables.tf b/terraform/modules/kubernetes-namespace/variables.tf index 66e03ce3..e5a8500e 100644 --- a/terraform/modules/kubernetes-namespace/variables.tf +++ b/terraform/modules/kubernetes-namespace/variables.tf @@ -22,7 +22,34 @@ variable "depends" { } variable "enable" { - description = "If set to true, create namespace" type = bool default = true + description = "If set to true, create namespace" +} + +variable "limits" { + type = any + default = [ + { + type = "Container" + default = { + cpu = "150m" + memory = "128Mi" + } + default_request = { + cpu = "100m" + memory = "64Mi" + } + } + ] +} + +variable "resource_quotas" { + type = any + default = [] +} + +variable "network_policies" { + type = any + default = [] } diff --git a/terraform/modules/kubernetes-network-policy-namespace/main.tf b/terraform/modules/kubernetes-network-policy-namespace/main.tf deleted file mode 100644 index dceaf895..00000000 --- a/terraform/modules/kubernetes-network-policy-namespace/main.tf +++ /dev/null @@ -1,65 +0,0 @@ -# Deny all incoming connections (include from current namespace) to any pod in current namespace -resource "kubernetes_network_policy" "deny-all" { - metadata { - name = "deny-all" - namespace = var.namespace - } - spec { - pod_selector { - } - - policy_types = ["Ingress"] - } - - depends_on = [var.depends] -} - -# Allow all connections in current namespace -resource "kubernetes_network_policy" "allow-from-this" { - metadata { - name = "allow-ingress-into-${var.namespace}" - namespace = var.namespace - } - spec { - pod_selector { - } - - ingress { - from { - pod_selector { - } - } - } - - policy_types = ["Ingress"] - } - - 
depends_on = [var.depends] -} - -# Allow all incoming connections from selected namespaces -resource "kubernetes_network_policy" "allow-from-ns" { - count = length(var.allow_from_namespaces) - metadata { - name = "allow-ingress-from-${var.allow_from_namespaces[count.index]}" - namespace = var.namespace - } - spec { - pod_selector { - } - - ingress { - from { - namespace_selector { - match_labels = { - name = var.allow_from_namespaces[count.index] - } - } - } - } - - policy_types = ["Ingress"] - } - - depends_on = [var.depends] -} diff --git a/terraform/modules/kubernetes-network-policy-namespace/output.tf b/terraform/modules/kubernetes-network-policy-namespace/output.tf deleted file mode 100644 index e69de29b..00000000 diff --git a/terraform/modules/kubernetes-network-policy-namespace/variables.tf b/terraform/modules/kubernetes-network-policy-namespace/variables.tf deleted file mode 100644 index 1014535c..00000000 --- a/terraform/modules/kubernetes-network-policy-namespace/variables.tf +++ /dev/null @@ -1,15 +0,0 @@ -variable "namespace" { - type = string - description = "Namespace name" -} - -variable "allow_from_namespaces" { - type = list(string) - description = "List of namespaces to allow trafic from." -} - -variable "depends" { - type = any - default = null - description = "Indicates the resource this resource depends on." -}
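
---
Reviewer note: the refactoring above replaces every `kubernetes_namespace` resource with the `kubernetes-namespace` module. A minimal usage sketch follows, showing what a caller gets when it passes only `name` (the `dev` namespace name here is illustrative; the relative `source` path assumes a layer directory under `terraform/`, matching `eks-namespaces.tf`):

```hcl
# Minimal sketch (assumed names): with no limits/resource_quotas/network_policies
# arguments, the module creates the namespace with its default Container
# LimitRange (150m CPU / 128Mi memory limits, 100m CPU / 64Mi memory requests,
# per the defaults in variables.tf) and creates no ResourceQuota or
# NetworkPolicy objects.
module "dev_namespace" {
  source = "../modules/kubernetes-namespace"
  name   = "dev"
}

# Other resources reference the namespace through the module output,
# as the helm_release changes in this diff do:
#   namespace = module.dev_namespace.name
```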