Ensure serviceaccount created in correct namespace #398
Conversation
@bobdoah thanks for the PR! LGTM!
Thanks for the PR 👍🏼 A few changes are required.
@@ -40,7 +40,7 @@ locals {
   }

   irsa_config = {
-    kubernetes_namespace        = "kube-system"
+    kubernetes_namespace        = local.helm_config["namespace"]
     kubernetes_service_account  = local.service_account_name
     create_kubernetes_namespace = false
This needs to be changed to create_kubernetes_namespace = true when we use a namespace other than kube-system, to ensure the namespace is created by the IRSA module.
@askulkarni2 Maybe we need to handle this in the IRSA module by checking for kube-system when create_kubernetes_namespace = true. See below:
resource "kubernetes_namespace_v1" "irsa" {
  count = var.create_kubernetes_namespace && var.kubernetes_namespace != "kube-system" ? 1 : 0

  metadata {
    name = var.kubernetes_namespace
    labels = {
      "app.kubernetes.io/managed-by" = "terraform-eks-blueprints"
    }
  }
}
This needs to be changed to create_kubernetes_namespace = true when we use a namespace other than kube-system, to ensure the namespace is created by the IRSA module.
I'm not sure I follow the logic. In my use-case, the namespace already exists. So this wouldn't work for me. That isn't done in other instances such as for the kubernetes_dashboard module.
I would suggest the create_kubernetes_namespace option should be configurable for each of the given addon modules, perhaps as part of the addon_context.
I'd be happy to look at that, but it is not the intent of this PR. I'm trying to bring these modules level with others, such as kubernetes_dashboard, that work with an existing namespace that is not kube-system. This behaviour did previously work.
Thanks @bobdoah, I appreciate your logic. Let me go through it in detail.
Namespace creation is currently handled only by the EKS Blueprints module, apart from existing system namespaces like kube-system. Users can pass any name as the namespace through the configuration below and expect the module to create it:
enable_aws_load_balancer_controller = true

# Optional
aws_load_balancer_controller_helm_config = {
  name       = "aws-load-balancer-controller"
  chart      = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  version    = "1.3.1"
  namespace  = "my_alb_ns"
  values     = [templatefile("${path.module}/values.yaml", {})]
}
The user-defined my_alb_ns namespace overrides the locally defined namespace. When we specify kubernetes_namespace = local.helm_config["namespace"] together with create_kubernetes_namespace = false, the above example introduces a defect: my_alb_ns won't be created because of the false condition.
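A minimal sketch of the defect just described (the namespace value is taken from the example above; treat the exact layout as illustrative):

```hcl
# With the user-supplied namespace from the example above,
# helm_config["namespace"] resolves to "my_alb_ns".
irsa_config = {
  kubernetes_namespace       = local.helm_config["namespace"] # "my_alb_ns"
  kubernetes_service_account = local.service_account_name

  # false means the IRSA module never creates "my_alb_ns", so the
  # service account is created in a namespace that does not exist.
  create_kubernetes_namespace = false
}
```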
Yes, I just noticed that the kubernetes-dashboard config is incorrect as well. I think the solution would be as I mentioned in my previous comment.
- We will discuss your suggestion of allowing users to define create_kubernetes_namespace through the Helm config.
@vara-bonthu I understand what you're saying, but that doesn't fit my use case. The namespace has already been created externally; it's not kube-system. I have no control over this, sadly. If I set it to true, I still won't be able to use this module as-is.
I have pushed a further commit. I think this is the simplest way I can satisfy both your use case and mine. The namespace will be created if it is not the default kube-system. The create_kubernetes_namespace option can be overridden by supplying a custom irsa_config to the module. The contents will be merged, in the same way as is done for the helm_config.
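The merge behaviour described above might be sketched as follows (names follow the diffs in this thread; the exact file layout is an assumption):

```hcl
locals {
  # Defaults used when the caller does not override anything.
  default_irsa_config = {
    kubernetes_namespace       = local.helm_config["namespace"]
    kubernetes_service_account = local.service_account_name

    # Only create the namespace when it is not the pre-existing kube-system.
    create_kubernetes_namespace = local.helm_config["namespace"] != "kube-system"
  }

  # Caller-supplied keys in var.irsa_config win over the defaults,
  # mirroring how helm_config is merged.
  irsa_config = merge(local.default_irsa_config, var.irsa_config)
}
```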
LGTM. @vara-bonthu anything else needed here?
  irsa_config = merge(
    local.default_irsa_config,
    var.irsa_config
  )
remove this block
@@ -39,11 +39,16 @@ locals {
     serviceAccountName = local.service_account_name
   }

-  irsa_config = {
-    kubernetes_namespace = "kube-system"
+  default_irsa_config = {
change this back to irsa_config
variable "irsa_config" {
  type        = any
  description = "IRSA config for the aws_load_balancer_controller."
  default     = {}
}
remove this variable
     kubernetes_service_account  = local.service_account_name
-    create_kubernetes_namespace = false
+    create_kubernetes_namespace = var.kubernetes_namespace != "kube-system" ? true : false
replace this line in the IRSA module

count = var.create_kubernetes_namespace ? 1 : 0

with the below

count = var.create_kubernetes_namespace && var.kubernetes_namespace != "kube-system" ? 1 : 0
@@ -36,7 +36,7 @@ locals {

   irsa_config = {
     create_kubernetes_namespace = false
     kubernetes_namespace        = "kube-system"
Here we are forcing the Cluster Autoscaler (CA) to run in kube-system. Don't change this line.
Running CA in another namespace on a user node is risky. If CA erases the node it is sitting on and some race condition occurs, you may end up with a cluster without both CA and the capacity to run anything.
Hence it's really important that:
- CA stays in kube-system, OR it has a restrictive PodDisruptionBudget
- CA is run at high priority or is marked as a critical pod
Hi @bobdoah, I thought about a better way of handling this scenario. Please see my new comments on the PR.
Please follow these steps if you want to bring your own namespace.
Note:
- EKS Blueprints manages the namespace by default
- If users want to manage their own namespace, they can pass the following config:
#---------------------------------------
# AWS LOAD BALANCER INGRESS CONTROLLER HELM ADDON
#---------------------------------------
enable_aws_load_balancer_controller = true

# Optional
aws_load_balancer_controller_helm_config = {
  name             = "aws-load-balancer-controller"
  namespace        = "my-own-namespace"
  create_namespace = false
}
Give this change a try and let me know if it works for you. I will try to update the other add-ons with a similar approach.
modules/kubernetes-addons/aws-load-balancer-controller/locals.tf
The serviceaccount for a module is created via the IRSA module. If the namespace of a module differs from the default, it needs to also be passed to the IRSA module. Otherwise any deployments tied to the serviceaccount will fail to deploy pods.
Create the namespace if it is not the default "kube-system", which (typically) exists on every cluster. To allow the namespace to be created externally to this module, allow the IRSA config to be overridden. This should fit both my use case and @vara-bonthu's.
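A hypothetical caller-side override matching the behaviour described in that commit message (module source path and argument set are illustrative, not the module's actual interface):

```hcl
# The namespace already exists and is managed externally,
# so tell the addon module not to create it.
module "aws_load_balancer_controller" {
  source = "./modules/kubernetes-addons/aws-load-balancer-controller"

  # Keys given here are merged over the module's defaults.
  irsa_config = {
    create_kubernetes_namespace = false
  }

  # ...other required arguments omitted...
}
```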
@vara-bonthu updated as requested.
Facing the same issue with multiple add-ons.
Applied the same try/true default to cert-manager.
The traefik module is more complicated, as the namespace resource is created directly in the module. It could be put behind a count, if that would suffice. It would, however, potentially remove the resources of anyone upgrading.
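A hedged sketch of count-gating a directly created namespace, as discussed for the traefik module (the variable name create_namespace and the use of local.helm_config are assumptions; the module's actual layout may differ):

```hcl
variable "create_namespace" {
  type        = bool
  description = "Whether this module should create the traefik namespace"
  default     = true
}

resource "kubernetes_namespace_v1" "traefik" {
  # Skip creation when the namespace is managed externally,
  # leaving an existing namespace untouched.
  count = var.create_namespace ? 1 : 0

  metadata {
    name = local.helm_config["namespace"]
  }
}
```

Note that flipping such a flag on an existing deployment removes the namespace from state, which is the upgrade concern raised above.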
@vara-bonthu @kcoleman731 @askulkarni2 any chance of getting this merged, now that @vara-bonthu is happy? Thanks
@bobdoah Happy to merge now. Just checking to see if this PR is working for you?
@bobdoah Testing now. Thanks
@vara-bonthu Yep, this is working for me.
Warning: Helm release "cert-manager" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then run Terraform again.

  with module.eks_addons_0.module.cert_manager[0].module.helm_addon.helm_release.addon[0],
  on .terraform/modules/eks_addons_0/modules/kubernetes-addons/helm-addon/main.tf line 1, in resource "helm_release" "addon":
   1: resource "helm_release" "addon" {

Warning: Helm release "ingress-nginx" was created but has a failed status. Use the `helm` command to investigate the error, correct it, then run Terraform again.

  with module.eks_addons_0.module.ingress_nginx[0].module.helm_addon.helm_release.addon[0],
  on .terraform/modules/eks_addons_0/modules/kubernetes-addons/helm-addon/main.tf line 1, in resource "helm_release" "addon":
   1: resource "helm_release" "addon" {

Error: failed post-install: timed out waiting for the condition

  with module.eks_addons_0.module.cert_manager[0].module.helm_addon.helm_release.addon[0],
  on .terraform/modules/eks_addons_0/modules/kubernetes-addons/helm-addon/main.tf line 1, in resource "helm_release" "addon":
   1: resource "helm_release" "addon" {

Error: timed out waiting for the condition

  with module.eks_addons_0.module.ingress_nginx[0].module.helm_addon.helm_release.addon[0],
  on .terraform/modules/eks_addons_0/modules/kubernetes-addons/helm-addon/main.tf line 1, in resource "helm_release" "addon":
   1: resource "helm_release" "addon" {

example 1

enable_cert_manager = true
cert_manager_helm_config = {
  name             = "cert-manager"
  chart            = "cert-manager"
  repository       = "https://charts.jetstack.io"
  version          = "v1.7.2"
  namespace        = "cert-manager"
  create_namespace = true
}

enable_ingress_nginx = true
ingress_nginx_helm_config = {
  name       = "ingress-nginx"
  chart      = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  version    = "4.0.17"
  values = [templatefile("${path.module}/helm_values/nginx-values.yaml", {
    hostname     = var.hosted_name
    ssl_cert_arn = module.shr_acm.acm_certificate_arn
  })]
}

example 2

enable_cert_manager = true
enable_ingress_nginx = true
ingress_nginx_helm_config = {
  values = [templatefile("${path.module}/helm_values/nginx-values.yaml", {
    hostname     = var.hosted_name
    ssl_cert_arn = module.shr_acm.acm_certificate_arn
  })]
}
Same errors with tetrate_istio.
What does this PR do?
The serviceaccount for a module is created via the IRSA module. If the
namespace of a module differs from the default, it needs to also be
passed to the IRSA module. Otherwise any deployments tied to the
serviceaccount will fail to deploy pods.
This PR changes the modules so that they can be deployed in a namespace other than "kube-system".
Motivation
In my configuration, I deploy the aws-load-balancer-controller to a different namespace; it is a security requirement. I am unable to use recent versions of this module to achieve that. This PR should fix that issue.
More
- pre-commit run -a with this PR
For Moderators
Additional Notes