From 198b259ba10b1cb467fb1dc73f26c6ef250bf0fb Mon Sep 17 00:00:00 2001
From: dmkononenko <55179680+dmkononenko@users.noreply.github.com>
Date: Fri, 23 Apr 2021 21:09:54 +0600
Subject: [PATCH 1/5] Update README.md

---
 README.md | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 00a1b4cb..c0c39ab7 100644
--- a/README.md
+++ b/README.md
@@ -24,11 +24,19 @@
 This repository contains the know-how of the Mad Devs team for the rapid deployment of a Kubernetes cluster, supporting services, and the underlying infrastructure in the Amazon cloud. The main development and delivery tool is [terraform](https://www.terraform.io/)

-In our company’s work, we have tried many infrastructure solutions and services and traveled the path from on-premise hardware to serverless. As of today, Kubernetes has become our standard platform for deploying applications, and AWS has become the main cloud. It is worth noting here that although 90% of our and our clients’ projects are hosted on AWS and [AWS EKS](https://aws.amazon.com/eks/) is used as the Kubernetes platform, we do not insist, do not drag everything to Kubernetes, and do not force anyone to be hosted on AWS. Kubernetes is offered only after the collection and analysis of service architecture requirements. And then, when choosing Kubernetes, it makes almost no difference to applications how the cluster itself is created—manually, through kops or using managed services from cloud providers—in essence, the Kubernetes platform is the same everywhere. So the choice of a particular provider is then made based on additional requirements, expertise, etc.
+In our company’s work, we have tried many infrastructure solutions and services and traveled the path from on-premise hardware to serverless. As of today, Kubernetes has become our standard platform for deploying applications, and AWS has become the main cloud.

-We know that the current implementation is far from being perfect. For example, we deploy services to the cluster using `terraform`: it is rather clumsy and against the Kuber approaches, but it is convenient for bootstrap because, by using state and interpolation, we convey proper `IDs`, `ARNs`, and other attributes to resources and names or secrets to templates and generate values from them for the required charts all within terraform. There are more specific drawbacks: the `data "template_file"` resources that we used for most templates are extremely inconvenient for development and debugging, especially if there are 500+ line rolls like `terraform/layer2-k8s/templates/elk-values.yaml`. Also, despite `helm3` got rid of the `tiller`, a large number of helm releases still at some point leads to plan hanging. Partially, but not always, it can be solved by `terraform apply -target`, but for the consistency of the state, it is desirable to execute `plan` and `apply` on the entire configuration. If you are going to use this boilerplate, it is advisable to split the `terraform/layer2-k8s` layer into several ones, taking out large and complex releases into separate modules.
+It is worth noting here that although 90% of our and our clients’ projects are hosted on AWS and [AWS EKS](https://aws.amazon.com/eks/) is used as the Kubernetes platform, we do not insist, do not drag everything to Kubernetes, and do not force anyone to be hosted on AWS. Kubernetes is offered only after the collection and analysis of service architecture requirements.

-You may reasonably question the number of `.tf` files. This monolith certainly should be refactored and split into many micro-modules adopting `terragrunt` approach. This is exactly what we will do in the near future, solving along the way the problems described above.
+And then, when choosing Kubernetes, it makes almost no difference to applications how the cluster itself is created—manually, through kops or using managed services from cloud providers—in essence, the Kubernetes platform is the same everywhere. So the choice of a particular provider is then made based on additional requirements, expertise, etc.
+
+We know that the current implementation is far from being perfect. For example, we deploy services to the cluster using terraform: it is rather clumsy and against the Kuber approaches, but it is convenient for bootstrap because, by using state and interpolation, we convey proper IDs, ARNs, and other attributes to resources and names or secrets to templates and generate values from them for the required charts all within terraform.
+
+There are more specific drawbacks: the data "template_file" resources that we used for most templates are extremely inconvenient for development and debugging, especially if there are 500+ line rolls like terraform/layer2-k8s/templates/elk-values.yaml. Also, despite helm3 got rid of the tiller, a large number of helm releases still at some point leads to plan hanging.
+
+Partially, but not always, it can be solved by terraform apply -target, but for the consistency of the state, it is desirable to execute plan and apply on the entire configuration. If you are going to use this boilerplate, it is advisable to split the terraform/layer2-k8s layer into several ones, taking out large and complex releases into separate modules.
+
+You may reasonably question the number of .tf files. This monolith certainly should be refactored and split into many micro-modules adopting terragrunt approach. This is exactly what we will do in the near future, solving along the way the problems described above.

 ## Table of contents

From f5df82583d6b3d29b4e40ffd01fa0b2253c7cce6 Mon Sep 17 00:00:00 2001
From: dmkononenko <55179680+dmkononenko@users.noreply.github.com>
Date: Fri, 23 Apr 2021 21:22:54 +0600
Subject: [PATCH 2/5] Update README.md

---
 README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/README.md b/README.md
index c0c39ab7..fbaaa5fd 100644
--- a/README.md
+++ b/README.md
@@ -32,9 +32,7 @@ And then, when choosing Kubernetes, it makes almost no difference to application
 We know that the current implementation is far from being perfect. For example, we deploy services to the cluster using terraform: it is rather clumsy and against the Kuber approaches, but it is convenient for bootstrap because, by using state and interpolation, we convey proper IDs, ARNs, and other attributes to resources and names or secrets to templates and generate values from them for the required charts all within terraform.

-There are more specific drawbacks: the data "template_file" resources that we used for most templates are extremely inconvenient for development and debugging, especially if there are 500+ line rolls like terraform/layer2-k8s/templates/elk-values.yaml. Also, despite helm3 got rid of the tiller, a large number of helm releases still at some point leads to plan hanging.
-
-Partially, but not always, it can be solved by terraform apply -target, but for the consistency of the state, it is desirable to execute plan and apply on the entire configuration. If you are going to use this boilerplate, it is advisable to split the terraform/layer2-k8s layer into several ones, taking out large and complex releases into separate modules.
+There are more specific drawbacks: the data "template_file" resources that we used for most templates are extremely inconvenient for development and debugging, especially if there are 500+ line rolls like terraform/layer2-k8s/templates/elk-values.yaml. Also, despite helm3 got rid of the tiller, a large number of helm releases still at some point leads to plan hanging. Partially, but not always, it can be solved by terraform apply -target, but for the consistency of the state, it is desirable to execute plan and apply on the entire configuration. If you are going to use this boilerplate, it is advisable to split the terraform/layer2-k8s layer into several ones, taking out large and complex releases into separate modules.

 You may reasonably question the number of .tf files. This monolith certainly should be refactored and split into many micro-modules adopting terragrunt approach. This is exactly what we will do in the near future, solving along the way the problems described above.

From b8e37bca644f5a7b8d07d10cddfe9d17ee0c1a30 Mon Sep 17 00:00:00 2001
From: dmkononenko <55179680+dmkononenko@users.noreply.github.com>
Date: Fri, 23 Apr 2021 21:30:41 +0600
Subject: [PATCH 3/5] Update README.md

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index fbaaa5fd..e9abecc0 100644
--- a/README.md
+++ b/README.md
@@ -22,17 +22,17 @@
 ## Description

-This repository contains the know-how of the Mad Devs team for the rapid deployment of a Kubernetes cluster, supporting services, and the underlying infrastructure in the Amazon cloud. The main development and delivery tool is [terraform](https://www.terraform.io/)
+This repository contains the know-how of the Mad Devs team for the rapid deployment of a Kubernetes cluster, supporting services, and the underlying infrastructure in the Amazon cloud. The main development and delivery tool is [terraform](https://www.terraform.io/).

-In our company’s work, we have tried many infrastructure solutions and services and traveled the path from on-premise hardware to serverless. As of today, Kubernetes has become our standard platform for deploying applications, and AWS has become the main cloud.
+In our company’s work, we have tried many infrastructure solutions and services and traveled the path from on-premise hardware to serverless. As of today, Kubernetes has become our standard platform for deploying applications, and AWS has become the main cloud.

 It is worth noting here that although 90% of our and our clients’ projects are hosted on AWS and [AWS EKS](https://aws.amazon.com/eks/) is used as the Kubernetes platform, we do not insist, do not drag everything to Kubernetes, and do not force anyone to be hosted on AWS. Kubernetes is offered only after the collection and analysis of service architecture requirements.

 And then, when choosing Kubernetes, it makes almost no difference to applications how the cluster itself is created—manually, through kops or using managed services from cloud providers—in essence, the Kubernetes platform is the same everywhere. So the choice of a particular provider is then made based on additional requirements, expertise, etc.

-We know that the current implementation is far from being perfect. For example, we deploy services to the cluster using terraform: it is rather clumsy and against the Kuber approaches, but it is convenient for bootstrap because, by using state and interpolation, we convey proper IDs, ARNs, and other attributes to resources and names or secrets to templates and generate values from them for the required charts all within terraform.
+We know that the current implementation is far from being perfect. For example, we deploy services to the cluster using `terraform`: it is rather clumsy and against the Kuber approaches, but it is convenient for bootstrap because, by using state and interpolation, we convey proper `IDs`, `ARNs`, and other attributes to resources and names or secrets to templates and generate values from them for the required charts all within terraform.

-There are more specific drawbacks: the data "template_file" resources that we used for most templates are extremely inconvenient for development and debugging, especially if there are 500+ line rolls like terraform/layer2-k8s/templates/elk-values.yaml. Also, despite helm3 got rid of the tiller, a large number of helm releases still at some point leads to plan hanging. Partially, but not always, it can be solved by terraform apply -target, but for the consistency of the state, it is desirable to execute plan and apply on the entire configuration. If you are going to use this boilerplate, it is advisable to split the terraform/layer2-k8s layer into several ones, taking out large and complex releases into separate modules.
+There are more specific drawbacks: the `data "template_file"` resources that we used for most templates are extremely inconvenient for development and debugging, especially if there are 500+ line rolls like `terraform/layer2-k8s/templates/elk-values.yaml`. Also, despite `helm3` got rid of the `tiller`, a large number of helm releases still at some point leads to plan hanging. Partially, but not always, it can be solved by `terraform apply -target`, but for the consistency of the state, it is desirable to execute `plan` and `apply` on the entire configuration. If you are going to use this boilerplate, it is advisable to split the `terraform/layer2-k8s` layer into several ones, taking out large and complex releases into separate modules.

 You may reasonably question the number of .tf files. This monolith certainly should be refactored and split into many micro-modules adopting terragrunt approach. This is exactly what we will do in the near future, solving along the way the problems described above.

From f6a289618b1d8f0b5674870d626b8c9142d56cac Mon Sep 17 00:00:00 2001
From: dmkononenko <55179680+dmkononenko@users.noreply.github.com>
Date: Fri, 23 Apr 2021 21:33:15 +0600
Subject: [PATCH 4/5] Update README-RU.md

---
 README-RU.md | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/README-RU.md b/README-RU.md
index f47b8568..ee35554b 100644
--- a/README-RU.md
+++ b/README-RU.md
@@ -22,11 +22,17 @@
 ## Описание

-В данном репозитории собраны наработки команды Mad Devs для быстрого развертывания Kubernetes кластера, вспомогательных сервисов и нижележащей инфраструктуры в облаке Amazon. Основным инструментом разработки и поставки является [terraform](https://www.terraform.io/)
+В данном репозитории собраны наработки команды Mad Devs для быстрого развертывания Kubernetes кластера, вспомогательных сервисов и нижележащей инфраструктуры в облаке Amazon. Основным инструментом разработки и поставки является [terraform](https://www.terraform.io/).

-За время работы компании мы перепробовали много инфраструктурных решений и сервисов, и прошли путь от on-premise железа до serverless. В итоге на текущий момент нашей стандартной платформой для развертывания приложений стал Kubernetes, а основным облаком - AWS. Тут стоит отметить, что несмотря на то, что 90% наших и клиентских проектов хостится на AWS, а в качестве Kubernetes платформы используется [AWS EKS](https://aws.amazon.com/eks/), мы не упираемся рогом, не тащим все подряд в Kubernetes и не заставляем хостится в AWS. Kubernetes предлагается только после сбора и анализа требований к архитектуре сервиса. А далее при выборе Kubernetes - приложениям почти не важно, как создан сам кластер - вручную, через kops или используя managed услуги облачных провайдеров - в своей основе платформа Kubernetes везде одинакова. И выбор конкретного провайдера уже складывается из дополнительный требований, экспертизы и т.д.
+За время работы компании мы перепробовали много инфраструктурных решений и сервисов, и прошли путь от on-premise железа до serverless. В итоге на текущий момент нашей стандартной платформой для развертывания приложений стал Kubernetes, а основным облаком - AWS.

-Мы знаем, что текущая реализация далеко не идеальна. Например, в кластер мы деплоим сервисы с помощью `terraform` - это довольно топорно и против подходов кубера, но это удобно для бутстрапа - т.к. используя стейт и интерполяцию, мы передаем необходимые `ids`, `arns` и другие указатели на ресурсы и имена или секреты в шаблоны и генерим из них `values` для нужных чартов, не выходя за пределы терраформа. Есть более специфичные минусы: ресурсы `data "template_file"`, которые мы использовали для большинства шаблонов, крайне неудобны для разработки и отладки, особенно если это такие 500+ строчные рулоны, типа `terraform/layer2-k8s/templates/elk-values.yaml`. Также, смотря на `helm3` и избавление от `tiller` - большое количество helm-релизов все равно в какой-то момент приводит к зависанию плана. Частично, но не всегда решается путем таргетированного апплая `terraform apply -target`, но для консистентности стейта желательно выполнять `plan` и `apply` целиком на всей конфигурации. Если собираетесь использовать данный бойлер, желательно разбить слой `terraform/layer2-k8s` на несколько, вынеся крупные и комплексные релизы в отдельные подслои.
+Тут стоит отметить, что несмотря на то, что 90% наших и клиентских проектов хостится на AWS, а в качестве Kubernetes платформы используется [AWS EKS](https://aws.amazon.com/eks/), мы не упираемся рогом, не тащим все подряд в Kubernetes и не заставляем хостится в AWS. Kubernetes предлагается только после сбора и анализа требований к архитектуре сервиса.
+
+А далее при выборе Kubernetes - приложениям почти не важно, как создан сам кластер - вручную, через kops или используя managed услуги облачных провайдеров - в своей основе платформа Kubernetes везде одинакова. И выбор конкретного провайдера уже складывается из дополнительный требований, экспертизы и т.д.
+
+Мы знаем, что текущая реализация далеко не идеальна. Например, в кластер мы деплоим сервисы с помощью `terraform` - это довольно топорно и против подходов кубера, но это удобно для бутстрапа - т.к. используя стейт и интерполяцию, мы передаем необходимые `ids`, `arns` и другие указатели на ресурсы и имена или секреты в шаблоны и генерим из них `values` для нужных чартов, не выходя за пределы терраформа.
+
+Есть более специфичные минусы: ресурсы `data "template_file"`, которые мы использовали для большинства шаблонов, крайне неудобны для разработки и отладки, особенно если это такие 500+ строчные рулоны, типа `terraform/layer2-k8s/templates/elk-values.yaml`. Также, смотря на `helm3` и избавление от `tiller` - большое количество helm-релизов все равно в какой-то момент приводит к зависанию плана. Частично, но не всегда решается путем таргетированного апплая `terraform apply -target`, но для консистентности стейта желательно выполнять `plan` и `apply` целиком на всей конфигурации. Если собираетесь использовать данный бойлер, желательно разбить слой `terraform/layer2-k8s` на несколько, вынеся крупные и комплексные релизы в отдельные подслои.

 Могут возникнуть справедливые вопросы к количеству `.tf` файлов. Оно конечно просится на рефакторинг и "обмодуливание". Чем мы и займемся в ближайшее время, разбивая этот монолит на микромодули и вводя `terragrunt`, попутно решая озвученные проблемы выше.

From 0efd133fb9ab0b99b27a6d5e77bafd4bbd05598c Mon Sep 17 00:00:00 2001
From: mglotov <37855803+mglotov@users.noreply.github.com>
Date: Fri, 23 Apr 2021 22:38:02 +0600
Subject: [PATCH 5/5] Update README-RU.md

---
 README-RU.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README-RU.md b/README-RU.md
index ee35554b..4853fdac 100644
--- a/README-RU.md
+++ b/README-RU.md
@@ -28,7 +28,7 @@
 Тут стоит отметить, что несмотря на то, что 90% наших и клиентских проектов хостится на AWS, а в качестве Kubernetes платформы используется [AWS EKS](https://aws.amazon.com/eks/), мы не упираемся рогом, не тащим все подряд в Kubernetes и не заставляем хостится в AWS. Kubernetes предлагается только после сбора и анализа требований к архитектуре сервиса.

-А далее при выборе Kubernetes - приложениям почти не важно, как создан сам кластер - вручную, через kops или используя managed услуги облачных провайдеров - в своей основе платформа Kubernetes везде одинакова. И выбор конкретного провайдера уже складывается из дополнительный требований, экспертизы и т.д.
+А далее при выборе Kubernetes - приложениям почти не важно, как создан сам кластер - вручную, через kops или используя managed услуги облачных провайдеров - в своей основе платформа Kubernetes везде одинакова. И выбор конкретного провайдера уже складывается из дополнительных требований, экспертизы и т.д.

 Мы знаем, что текущая реализация далеко не идеальна. Например, в кластер мы деплоим сервисы с помощью `terraform` - это довольно топорно и против подходов кубера, но это удобно для бутстрапа - т.к. используя стейт и интерполяцию, мы передаем необходимые `ids`, `arns` и другие указатели на ресурсы и имена или секреты в шаблоны и генерим из них `values` для нужных чартов, не выходя за пределы терраформа.
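The deployment pattern the README text describes, generating chart values from attributes that terraform already knows and feeding them to a helm release, can be sketched roughly as follows. This is a minimal, hypothetical example rather than the boilerplate's actual code: the resource names, chart path, and template file are made up, and it assumes the AWS and Helm providers are already configured. It uses the built-in `templatefile()` function in place of the `data "template_file"` resources the text calls inconvenient.

```hcl
# Attributes known only after apply (IDs, ARNs, the account number) are
# interpolated into a values template and passed straight to the chart.
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "example_app" {
  bucket = "example-app-assets" # hypothetical bucket consumed by the chart
}

resource "helm_release" "example_app" {
  name      = "example-app"
  chart     = "${path.module}/charts/example-app" # hypothetical local chart
  namespace = "default"

  # templatefile() renders templates/example-app-values.yaml with values
  # taken from terraform state, so the whole bootstrap stays in one apply.
  values = [
    templatefile("${path.module}/templates/example-app-values.yaml", {
      bucket_name = aws_s3_bucket.example_app.id
      account_id  = data.aws_caller_identity.current.account_id
    })
  ]
}
```

The rendered YAML reaches the release exactly as if it had been written by hand, which is what keeps the bootstrap inside a single `terraform plan` and `apply`.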
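The split the README recommends, carving large releases out of `terraform/layer2-k8s` so that `plan` stays fast, could look roughly like this under the `terragrunt` approach mentioned at the end. The directory layout, module path, and output names below are assumptions for illustration, not the repository's actual structure.

```hcl
# live/eks-elk/terragrunt.hcl - a hypothetical unit holding only the ELK release
terraform {
  source = "../../modules/eks-elk" # hypothetical micro-module split out of layer2-k8s
}

include {
  path = find_in_parent_folders() # shared remote state and provider settings
}

dependency "eks" {
  config_path = "../eks" # the unit that actually creates the cluster
}

inputs = {
  cluster_name = dependency.eks.outputs.cluster_name
}
```

Each unit keeps its own small state, so a heavy release such as ELK can be planned and applied on its own without resorting to `terraform apply -target`.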