
EKS 1.25: can I just upgrade, or does the Blueprints team need to do some work on it? #1456

Closed
mk2134226 opened this issue Feb 25, 2023 · 9 comments · Fixed by #1494
Labels
enhancement New feature or request

Comments

@mk2134226

EKS 1.25: can I just upgrade, or does the Blueprints team need to do some work on it?

I checked with kubent and I got this warning, so I want to make sure whether it will cause any issue:


>>> Deprecated APIs removed in 1.25 <<<

KIND                NAMESPACE   NAME                           API_VERSION      REPLACE_WITH (SINCE)
PodSecurityPolicy               aws-for-fluent-bit             policy/v1beta1   (1.21.0)
PodSecurityPolicy               aws-node-termination-handler   policy/v1beta1   (1.21.0)
PodSecurityPolicy               eks.privileged                 policy/v1beta1   (1.21.0)
PodSecurityPolicy               kubecost-cost-analyzer-psp     policy/v1beta1   (1.21.0)

@FernandoMiguel
Contributor

I upgraded a test cluster and the only thing I had to do was to upgrade AWS LBC to the latest release that fixes a deprecation

askulkarni2 added the enhancement label on Mar 15, 2023
@askulkarni2
Contributor

@mk2134226 we likely need to update some add-on versions for 1.25 support. I will add this to our backlog. Overriding the blueprints default versions of the add-ons with the latest ones should work, as @FernandoMiguel points out. I highly encourage you to test this out in a test environment first. Feel free to drop us a note here if you run into any issues.
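For example, something along these lines in the kubernetes-addons module should do it. The variable name follows the module's usual <addon>_helm_config pattern and the 1.4.8 version is only an illustration, so double-check both against the current docs and Artifact Hub:

  # Assumed override: pin the AWS Load Balancer Controller chart to a newer
  # release than the blueprints default. Verify the exact chart version on
  # Artifact Hub before applying.
  aws_load_balancer_controller_helm_config = {
    version = "1.4.8"
  }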

@wawrzek

wawrzek commented Mar 24, 2023

Installing 1.25 with Blueprints, I have a problem with the following:

module.eks_blueprints_kubernetes_addons.module.aws_node_termination_handler[0].module.helm_addon.helm_release.addon[0]: Creating...
╷
│ Warning: "default_secret_name" is no longer applicable for Kubernetes v1.24.0 and above
│
│   with module.eks_blueprints_kubernetes_addons.module.aws_load_balancer_controller[0].module.helm_addon.module.irsa[0].kubernetes_service_account_v1.irsa[0],
│   on .terraform/modules/eks_blueprints_kubernetes_addons/modules/irsa/main.tf line 30, in resource "kubernetes_service_account_v1" "irsa":
│   30: resource "kubernetes_service_account_v1" "irsa" {
│
│ Starting from version 1.24.0 Kubernetes does not automatically generate a token for service accounts, in this case, "default_secret_name" will be empty
│
│ (and 2 more similar warnings elsewhere)
╵
╷
│ Error: unable to build kubernetes objects from release manifest: resource mapping not found for name: "aws-node-termination-handler" namespace: "" from "": no matches for kind "PodSecurityPolicy" in version "policy/v1beta1"
│ ensure CRDs are installed first
│
│   with module.eks_blueprints_kubernetes_addons.module.aws_node_termination_handler[0].module.helm_addon.helm_release.addon[0],
│   on .terraform/modules/eks_blueprints_kubernetes_addons/modules/kubernetes-addons/helm-addon/main.tf line 1, in resource "helm_release" "addon":
│    1: resource "helm_release" "addon" {

If you prefer, I can create a separate ticket for it.

My config:

module "eks_blueprints_kubernetes_addons" {
  source                       = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.26.0"
  eks_cluster_id               = module.eks.cluster_name
  eks_oidc_provider            = module.eks.oidc_provider
  eks_cluster_endpoint         = module.eks.cluster_endpoint
  eks_cluster_version          = module.eks.cluster_version
  eks_worker_security_group_id = module.eks.node_security_group_id
  auto_scaling_group_names     = module.eks.eks_managed_node_groups_autoscaling_group_names
  #K8s Add-ons
  enable_aws_load_balancer_controller = true
  enable_aws_node_termination_handler = true
  enable_cluster_autoscaler           = true
}

@LeoSpyke

Related issue in the eks-charts repository: aws/eks-charts#856

@mvanbaak

mvanbaak commented Apr 4, 2023

Fluent Bit can be made to work with the following extra code:

  aws_for_fluentbit_helm_config = {
    version = "0.1.24"
  }

The best solution would be a version bump here:

but this workaround makes the aws-for-fluentbit add-on install and run fine on EKS 1.25.

@armujahid
Contributor

armujahid commented Apr 10, 2023

aws-for-fluentbit and aws_node_termination_handler have been updated in my linked PR. Other modules that need updates can be identified and updated, by me or someone else, in that same PR or a different one.
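Until that lands, the same kind of override used above for fluent-bit should also work for the node termination handler. A sketch, assuming the module exposes the usual <addon>_helm_config variable for it; the version below is a placeholder for the first aws-node-termination-handler chart release that no longer ships a PodSecurityPolicy template (check the eks-charts releases):

  # Assumption: pin the NTH chart to a release without the PodSecurityPolicy
  # template. Replace the placeholder version after checking the eks-charts
  # changelog.
  aws_node_termination_handler_helm_config = {
    version = "0.21.0"
  }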

@itamararjuan

Hi @FernandoMiguel - I'm trying the same as you on a test cluster but can't seem to get it to work.
The original helm chart version was 1.4.5, and I upgraded to the latest, which seems to be 1.4.8 from what I can tell from this helm chart.

When I try to query: kubectl get deployment -n kube-system aws-load-balancer-controller
I get this unwanted result

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   0/2     2            0           171m

Any pointers would really help : )
Thanks!

@FernandoMiguel
Contributor

@itamararjuan 1.4.8 is indeed the latest according to https://artifacthub.io/packages/helm/aws/aws-load-balancer-controller
We had nothing else to change.
Get some logs and report it in the AWS LBC repo.

@itamararjuan

itamararjuan commented Apr 11, 2023

Hey @FernandoMiguel
I found out what the error was! I am testing the Karpenter example, and since Fargate can't access the AWS IMDS, the controller can't find the vpcId automatically (I assume you are using a managed node group and not Fargate).

I had to modify values.yaml to also include the vpcId variable, like so:

clusterName: ${eks_cluster_id}
region: ${aws_region}
vpcId: ${aws_vpc_id}
image:
  repository: ${repository}

For the aws-load-balancer-controller module, I also had to wire that variable in from outside, like so (in the locals.tf file):

default_helm_values = [templatefile("${path.module}/values.yaml", {
    aws_region     = var.addon_context.aws_region_name,
    aws_vpc_id     = var.addon_context.aws_vpc_id, # <--- the newly wired variable
    eks_cluster_id = var.addon_context.eks_cluster_id,
    repository     = "${var.addon_context.default_repository}/amazon/aws-load-balancer-controller"
  })]

I also, of course, had to add this variable to the declaration of addon_context, since that property didn't exist there (see the sketch below).
I'll open a PR for the Blueprints team to look at, relevant only for Fargate clusters.
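Roughly like this in the add-on module's variables.tf; only the aws_vpc_id attribute is the new part, and the other attributes shown are illustrative rather than the module's full list:

variable "addon_context" {
  description = "Input configuration for the add-on"
  type = object({
    aws_region_name    = string
    eks_cluster_id     = string
    default_repository = string
    aws_vpc_id         = string # <--- new attribute, wired through from the root module
    # ...remaining attributes unchanged
  })
}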

and now it works! :-)

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           55s
