Releases: cloudposse/terraform-aws-eks-node-group

v0.11.0

07 Sep 02:16
7a1248f

🚀 Enhancements

Optional Create before destroy. Add Launch Template and related features. @Nuru (#31)

what

Implement "create before destroy" for zero-downtime node group updates. This is optional and off by default, because on first use it will cause any existing node groups created with this module to be destroyed and then replaced, causing the same kind of outage this feature prevents once it is activated.

  • Because node groups must have unique names within a cluster, creating a new node group before destroying the old one requires node groups to have random names. This is implemented by adding a 1-word random pet name to the end of the static node group name. Turning this on (or turning it off after it has been on) will cause previously created node groups to be replaced because of the change in name.
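The random naming scheme can be pictured with a small sketch (the resource and local names here are illustrative, not the module's actual internals):

```hcl
# Illustrative sketch of the "random pet suffix" naming scheme.
# Resource and local names are hypothetical, not the module's internals.
resource "random_pet" "suffix" {
  length = 1 # one-word pet name, e.g. "lionfish"
}

locals {
  static_name = "eks-workers"

  # e.g. "eks-workers-lionfish"; regenerating the pet forces a new,
  # unique node group name so the new group can be created first
  node_group_name = "${local.static_name}-${random_pet.suffix.id}"
}
```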

Add features previously missing here but present in terraform-aws-eks-workers, to the extent supported by AWS, such as

  • Set nodes to launch with Kubernetes taints
  • Specify launch template (not all features supported by AWS, see "Launch template - Prohibited" in AWS documentation)
  • Specify AMI for nodes
  • Arbitrary bootstrap.sh options
  • Arbitrary kubelet options
  • "Before join" and "after join" scripting
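Taken together, these features might be enabled like this. The input names and value shapes below are assumptions based on this release's description; check the module's variables.tf for the exact names and types:

```hcl
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name = "example-cluster"      # placeholder
  subnet_ids   = ["subnet-aaaa1111"]    # placeholder

  # Input names below are assumptions; verify against variables.tf.
  kubernetes_taints = {
    dedicated = "batch:NoSchedule"
  }
  kubelet_additional_options      = "--max-pods=58"
  bootstrap_additional_options    = "--container-runtime containerd"
  before_cluster_joining_userdata = "echo 'running before join'"
  after_cluster_joining_userdata  = "echo 'running after join'"
}
```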

why

  • Many kinds of node group changes require Terraform to replace the existing node group with a new one. The default Terraform behavior is to delete the old resource before creating the new one, since many resources (such as node group) require unique names, so you cannot create the new resource while the old one exists. However, this results in the node group being completely destroyed, and therefore offline for several minutes, which is usually an unacceptable outage. Now you can avoid this by setting create_before_destroy = true.
  • Useful features previously unavailable, bring closer to feature parity with terraform-aws-eks-workers.
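A minimal sketch of opting in (cluster name and subnet are placeholders; remember that flipping this flag on an existing deployment forces one replacement):

```hcl
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name = "example-cluster"
  subnet_ids   = ["subnet-aaaa1111"]

  # Enables zero-downtime replacement; toggling this on an existing
  # node group causes a one-time replacement due to the name change.
  create_before_destroy = true
}
```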

caveats

When using create before destroy

We cannot automatically detect when the node group will be destroyed and generate a new name for it. Instead, we have tried to cause a new name to be generated whenever anything changes that would cause the node group to be destroyed. This may not be perfect. If the name changes unnecessarily, it will trigger a node group replacement, which should be tolerable. If the name fails to change when it needs to, the Terraform apply will fail with an error about the resource already existing. Please let us know what change we missed so we can update the module. Meanwhile, you can get around this by manually "tainting" the random_pet as explained below.

For a short period of time you will be running 2 node groups.

  • There may still be service outages related to pods and EBS volumes transferring from the old node group to the new one, though this should generally behave like the cluster rapidly scaling up and rapidly scaling back down. If you have issues with autoscaling, such as running single replicas with minAvailable: 25% (which rounds up to minAvailable: 1), preventing the pod from being drained from a node, you may have issues with node groups being replaced.
  • Your AWS service quotas need to be large enough to run 2 sets of node groups at the same time. If you do not have enough quota for that, launching the new node group will fail. If the new node group launch fails, you will need to manually taint the random_pet resource because while Terraform tries to replace the tainted new node group, it will try to do so with the same name (and fail) unless you also taint random_pet. Assuming you invoked the module as module "eks_node_group", you would taint random_pet with
terraform taint 'module.eks_node_group.random_pet.cbd[0]'

Using new features

Many of the new features of this module rely on new AWS features, and it is unclear to what extent they actually work.

  • It appears that it is still not possible to tag the Auto Scaling Group or Launch Template with extra tags for the Kubernetes Cluster Autoscaler.
  • It appears that it is still not possible to propagate tags to elastic GPUs or spot instance requests.
  • There may be other issues similarly beyond our control.
  • There are many new features in this module and it has not been comprehensively tested, so be cautious and test your use cases on non-critical clusters before moving this into production.

Most of the new features require this module to create a Launch Template, and of course you can now supply your own launch template (referenced by name). There is some overlap between settings that can be made directly on an EKS managed node group and some that can be made in a launch template. This results in settings being allowed in one place and not in the other: these limitations and prohibitions are detailed in the AWS documentation. This module attempts to resolve these differences in many cases, but some limitations remain:

  • Remote access via SSH is not supported when using a launch template created by this module. Correctly configuring the launch template for remote access is tricky because it interferes with automatic configuration of access by the Kubernetes control plane. We do not need it and cannot test it at this time, so we do not support it, but if you need it, you can create your own launch template that has the desired configuration and leave the ec2_ssh_key setting null.
  • If you supply the Launch Template, this module requires that the Launch Template specify the AMI Image ID to use. This requirement could be relaxed in the future if we find demand for it.
  • In general, this module assumes you are using an Amazon Linux 2 AMI, and supports selecting the AMI by Kubernetes version or AMI release version. If you are using some other AMI that does not support Amazon's bootstrap.sh, most of the new features will not work. You will need to implement them yourself on your AMI. You can provide arbitrary (Base64 encoded) User Data to your AMI via userdata_override.
  • No support for spot instances specified by launch template (EKS limitation).
  • No support for shutdown behavior or "Stop - Hibernate" behavior in launch template (EKS limitation).
  • No support for IAM instance profile or Subnets via Launch Template (EKS limitation). You can still supply subnets via subnet_ids and the module will apply them via the node group configuration.
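For the userdata_override path mentioned above, a hedged sketch of running a non-Amazon-Linux AMI that handles its own bootstrap (AMI ID and input names are placeholders; the module's exact input names should be confirmed in variables.tf):

```hcl
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name = "example-cluster"
  subnet_ids   = ["subnet-aaaa1111"]
  ami_image_id = "ami-0123456789abcdef0" # custom AMI that does its own bootstrap

  # Complete user data for the custom AMI, Base64 encoded as the
  # release notes describe; the module passes it through unmodified.
  userdata_override = base64encode(<<-EOT
    #!/bin/bash
    /etc/eks/bootstrap.sh example-cluster
  EOT
  )
}
```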

references

Many of the new features are made possible by EKS adding support for launch templates.

v0.10.0

30 Aug 14:05
ac814c6
v0.10.0 Pre-release
Fixing issues w/ userdata @danjbh (#28)

what

After further testing, I discovered that the default userdata is being added to the end of the custom userdata we're supplying via our launch template. This causes bootstrap.sh to be called twice and creates a condition where a node group fails to provision correctly in some circumstances. And after digging further, aws_eks_node_group appears to be doing a bit of trickery w/ the launch templates under the hood, contrary to our initial expectation that the userdata we were supplying would act as an override.

We'll need to revisit this once there is more information/documentation available on the exact behavior and whether or not it's possible to completely override userdata when using aws_eks_node_group.

Anyhow, for now I propose that we just support before_cluster_joining_userdata and omit the rest of the userdata options. This will provide us with the proper tag propagation, as well as the ability to add some custom provisioning to the node as requested by the community.

why

  • The latest version of the module may not work at all for some folks, unfortunately

UPDATE

After further research, I found the following in the introduction blog post for this feature...

Note that user data is used by EKS to insert the EKS bootstrap invocation for your managed nodes. EKS will automatically merge this in for you, unless a custom AMI is specified. In that case, you’ll need to add that in.

So hypothetically, if we supply our AMI configuration option (which presents it's own challenges), we should be able to override the userdata completely and supply our own kubelet arguments directly (e.g. taints). We can discuss this approach in another issue/PR, but I say for now we proceed with this PR and get the first couple bits of functionality working reliably. We'll regroup and proceed from there.
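The approach described above — supplying a custom AMI so EKS stops merging in its own bootstrap invocation, then owning the full user data — can be sketched as follows (cluster name, AMI ID, and taint are placeholders, not tested configuration):

```hcl
resource "aws_launch_template" "this" {
  name_prefix = "eks-ng-"
  image_id    = "ami-0123456789abcdef0" # custom AMI: EKS will NOT merge its own user data

  # With a custom AMI we must invoke bootstrap.sh ourselves, which lets
  # us pass kubelet arguments such as taints directly.
  user_data = base64encode(<<-EOT
    #!/bin/bash
    /etc/eks/bootstrap.sh example-cluster \
      --kubelet-extra-args '--register-with-taints=dedicated=batch:NoSchedule'
  EOT
  )
}
```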

references

https://aws.amazon.com/blogs/containers/introducing-launch-template-and-custom-ami-support-in-amazon-eks-managed-node-groups/

v0.9.0

29 Aug 03:18
842e0a6
v0.9.0 Pre-release
Adding support for launch templates & userdata parameters @danjbh (#27)

what

  • Adding default launch template configuration
  • Adding ability to provide your own launch template by overriding the launch template id & version
  • Adding dynamic config options for user_data
  • Bumping various upstream module versions & tests
  • Keeping instance_types as a list but adding TF 0.13 variable validation
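The last bullet — keeping instance_types as a list while enforcing a single element — can be sketched with Terraform 0.13 variable validation (the exact wording and error message are illustrative):

```hcl
variable "instance_types" {
  type        = list(string)
  description = "Instance types for the node group; EKS currently accepts exactly one."

  # TF 0.13 validation: keep the list shape for forward compatibility,
  # but reject anything other than a single instance type.
  validation {
    condition     = length(var.instance_types) == 1
    error_message = "Only one instance type is currently supported by EKS managed node groups."
  }
}
```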

why

In previous versions of the AWS provider (2.x), you could not define your own launch template for aws_eks_node_group. Additionally, the tags specified in the aws_eks_node_group definition were not being passed down to the EC2 instances created by the ASG, which made tasks like monitoring and cost tracking difficult.

The latest versions of the AWS provider (3.x) give us the ability to specify our own launch template directly from aws_eks_node_group, which allows us to set our own options (e.g. tag_specifications, user_data, etc.).
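A minimal sketch of that pattern with the AWS provider 3.x resources (role ARN, subnet, and tags are placeholders):

```hcl
resource "aws_launch_template" "this" {
  name_prefix = "eks-ng-"

  # Propagate tags to the EC2 instances the ASG launches, which tags on
  # aws_eks_node_group alone did not do.
  tag_specifications {
    resource_type = "instance"
    tags = {
      Environment = "prod"
    }
  }
}

resource "aws_eks_node_group" "this" {
  cluster_name    = "example-cluster"
  node_group_name = "example"
  node_role_arn   = "arn:aws:iam::111111111111:role/example" # placeholder
  subnet_ids      = ["subnet-aaaa1111"]

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  launch_template {
    id      = aws_launch_template.this.id
    version = aws_launch_template.this.latest_version
  }
}
```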

This also should satisfy the requests in #24

v0.8.0

28 Aug 07:25
569cdb9
Convert to context.tf, allow AWS 3.0 provider @Nuru (#26)

what

  • Allow AWS provider version 3.0
  • Convert to context.tf, update chatops, add auto-release

why

  • New features, specifically launch templates
  • Standardize

0.7.1: [AUTOMATED] Update terraform-null-label versions to support Terraform 0.13 (#23)

18 Aug 07:46
592a34d

* [AUTOMATED] Update terraform-null-label versions to support Terraform 0.13

* Updated README.md

Co-authored-by: actions-bot <58130806+actions-bot@users.noreply.github.com>

0.7.0: [AUTOMATED] Update Version Pinning for Terraform to support 0.13 (#22)

18 Aug 07:45
2550397
## What

1. Update Version Pinning for Terraform to support 0.13

## Why

1. This is a relatively minor update that the CloudPosse module already likely supports.
2. This allows module consumers to not individually update our Terraform module to support Terraform 0.13.

0.6.0 Add support for `environment` input. Update tests

16 Jul 18:16
052c59c

what

  • Add support for the environment attribute that has been added to terraform-null-label
  • Update tests

why

  • environment attribute is used for naming AWS resources
  • Bring the tests up to date, use Go modules and latest k8s client libraries

0.5.0: Updates to ChatOps - Automated commit (#20)

14 Jul 04:46
f57b87b
## What
* Adds chatops commands
  - '/test all'
  - '/test bats'
  - '/test readme'
  - '/test terratest'
* Drops codefresh
* Drops slash-command-dispatch
* Removes codefresh badge
* Rebuilds README

## Why
* Change over from codefresh to GH Actions
* Facilitate testing of PRs from forks

0.4.2

18 Jun 17:17
e3a603c

PR #19

  • Ignore external changes to desired group size, closes #12
  • Use current "partition" in hard-coded ARNs, closes #16, thank you @woz5999
  • Remove erroneous assignment from README, closes #5, thank you @MPV

0.4.1: Add optional variable to transmit "depends_on" dependency (#15)

01 May 00:57
e248c50

Add module_depends_on variable to allow user to force this module to wait for the creation of an arbitrary resource before creating the node group
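A minimal sketch of using it (the IAM policy attachment is an arbitrary example of a resource you might want created first):

```hcl
# Hypothetical prerequisite resource the node group should wait for.
resource "aws_iam_role_policy_attachment" "example" {
  role       = "example-node-role"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name = "example-cluster"
  subnet_ids   = ["subnet-aaaa1111"]

  # Force this module to wait for the attachment before creating the node group
  module_depends_on = aws_iam_role_policy_attachment.example
}
```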