
Cannot use custom launch template when launch_template_name is not specified #1816

Closed
tmokmss opened this issue Jan 29, 2022 · 8 comments · Fixed by #1824

Comments

@tmokmss

tmokmss commented Jan 29, 2022

Description

Hi team, I found a slightly confusing behavior in the eks-managed-node-group module.

When I don't specify launch_template_name and leave it blank, the launch template is not used by the corresponding managed node group. I've already found out why it happens (details below), so could you confirm whether this is intended behavior?

Versions

  • Terraform: v1.1.4
  • Provider(s): registry.terraform.io/hashicorp/aws v3.73.0
  • Module: v18.2.3

Reproduction

Steps to reproduce the behavior:

Deploy a VPC, an EKS cluster, and a managed node group. To reproduce this, we must use the eks-managed-node-group submodule directly, rather than the eks_managed_node_groups input of the EKS cluster module.

Code Snippet to Reproduce

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = var.cluster_name
  cidr = "10.0.0.0/16"

  azs             = data.aws_availability_zones.available.names
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false
}

module "cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  cluster_version = "1.21"
  cluster_name    = var.cluster_name
  vpc_id          = module.vpc.vpc_id
  subnet_ids      = module.vpc.private_subnets
}

module "managed_node_group" {
  source = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"

  name                 = "group"
  cluster_name         = var.cluster_name
  # this line is practically required
  # launch_template_name = "template_name"

  subnet_ids             = module.vpc.public_subnets
  vpc_id                 = module.vpc.vpc_id
  vpc_security_group_ids = [module.cluster.node_security_group_id]
}

Expected behavior

The resulting ASG uses the custom launch template.

Actual behavior

The resulting ASG uses the default launch template.

Looking at the tfstate, the launch_template field in the aws_eks_node_group resource is empty.
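
For comparison, when the custom template is attached correctly, the aws_eks_node_group resource should carry a launch_template block along these lines (a sketch with hypothetical name/version values, not output from this repro):

resource "aws_eks_node_group" "this" {
  # ... other arguments omitted ...

  # Present when the custom template is in use; in the broken case this
  # block is missing entirely and EKS falls back to its default template.
  launch_template {
    name    = "group" # hypothetical launch template name
    version = "1"     # hypothetical template version
  }
}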

Terminal Output Screenshot(s)

Additional context

The behavior is due to the code below. When launch_template_name is not set, the custom launch template is never attached to the managed node group.

locals {
  use_custom_launch_template = var.launch_template_name != ""
}

dynamic "launch_template" {
  for_each = local.use_custom_launch_template ? [1] : []
  content {
    name    = local.launch_template_name
    version = local.launch_template_version
  }
}

We might need another condition when defining the use_custom_launch_template local.
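
One possible direction (just a sketch of the idea, not necessarily the fix that landed in #1824) would be to also take the submodule's create_launch_template flag into account, so that a template created by the module counts as custom even without an explicit name:

locals {
  # Sketch: treat the template as custom whenever this module creates one,
  # or whenever an explicit launch_template_name is supplied.
  use_custom_launch_template = var.create_launch_template || var.launch_template_name != ""
}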

BTW, if we use the eks_managed_node_groups input to deploy managed node groups, the problem doesn't happen, because launch_template_name is always defined internally here:

launch_template_name = try(each.value.launch_template_name, each.key)
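
As a stopgap for users of the standalone submodule on current releases, passing any non-empty launch_template_name (the value itself is arbitrary; "group" below just mirrors the node group name from the repro) makes the condition true and attaches the custom template:

module "managed_node_group" {
  source = "terraform-aws-modules/eks/aws//modules/eks-managed-node-group"

  name         = "group"
  cluster_name = var.cluster_name

  # Workaround: any non-empty name flips use_custom_launch_template to true,
  # so the node group is attached to the generated launch template.
  launch_template_name = "group"

  subnet_ids             = module.vpc.public_subnets
  vpc_id                 = module.vpc.vpc_id
  vpc_security_group_ids = [module.cluster.node_security_group_id]
}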

Thanks!

@philicious
Contributor

I can confirm this. I encountered the same issue and came to the same conclusion about the code upon investigation.

If you don't specify a launch template name, the node group will use the EKS default launch template instead of your custom one built from the provided launch template config.

@bryantbiggs
Member

Yes, with the repro provided by @tmokmss I am seeing this as well; currently looking into it this morning.

@philicious
Contributor

@bryantbiggs btw, contrary to @tmokmss's suggestion, I experienced this myself while using the eks_managed_node_groups block

@bryantbiggs
Member

@bryantbiggs btw, contrary to @tmokmss's suggestion, I experienced this myself while using the eks_managed_node_groups block

I don't follow

@snowzach

snowzach commented Feb 1, 2022

I was just looking around for solutions to my problem and came across this, and I'm wondering if it's related. I have the following under my EKS config:

eks_managed_node_group_defaults = {
  ami_type               = "AL2_x86_64"
  disk_size              = 50
  vpc_security_group_ids = [aws_security_group.node_group_default_sg.id]
  tags = {
    "k8s.io/cluster-autoscaler/${local.name}" = "owned"
    "k8s.io/cluster-autoscaler/enabled"       = "TRUE"
  }
}

eks_managed_node_groups = {
  "default" = {
    instance_types = ["t3.large"]
    min_size       = 1
    max_size       = 1
    desired_size   = 1
    disk_size      = 50
    subnet_ids     = data.aws_subnets.private_zone1.ids
    placement      = {}
  }
}

Every time I apply, it creates a new launch template and recreates the nodes. Nothing appears to be different that would warrant creating a new launch template.

@bryantbiggs
Member

@snowzach no, they aren't related. Can you open a new issue and paste in the output from your plan showing what's triggering the change?

@antonbabenko
Member

This issue has been resolved in version 18.2.5 🎉

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 14, 2022