Merged
1 change: 1 addition & 0 deletions docs/attributes.txt
@@ -0,0 +1 @@
:tcx5-waiver: pass:[ ]
9,393 changes: 0 additions & 9,393 deletions docs/book.html

This file was deleted.

49 changes: 49 additions & 0 deletions docs/clusters/addons.adoc
@@ -89,6 +89,16 @@ eksctl create addon -f config.yaml
eksctl create addon --name vpc-cni --version 1.7.5 --service-account-role-arn <role-arn>
----

[,console]
----
eksctl create addon --name aws-ebs-csi-driver --namespace-config 'namespace=custom-namespace'
----

[TIP]
====
Use the `--namespace-config` flag to deploy addons to a custom namespace instead of the default namespace.
====

During addon creation, if a self-managed version of the addon already exists on the cluster, you can choose how potential `configMap` conflicts are resolved by setting the `resolveConflicts` option via the config file, e.g.:

[,yaml]
@@ -195,6 +205,7 @@ addons:
====
Bear in mind that when addon configuration values are being modified, configuration conflicts will arise.
====

Thus, we need to specify how to deal with those conflicts by setting the `resolveConflicts` field accordingly.
Since in this scenario we want to modify these values, we'd set `resolveConflicts: overwrite`.
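
Putting this together, a minimal config-file sketch that overwrites conflicting values could look like the following (the addon name and version here are illustrative):

[,yaml]
----
addons:
- name: vpc-cni
  version: latest
  resolveConflicts: overwrite
----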

@@ -216,6 +227,41 @@ eksctl get addon --cluster my-cluster --output yaml
Version: v1.8.7-eksbuild.3
----

== Using custom namespace
A custom namespace can be provided in the configuration file when creating an addon. The namespace can't be updated once the addon has been created.

=== Using config file
[,yaml]
----
addons:
- name: aws-ebs-csi-driver
version: latest
namespaceConfig:
namespace: custom-namespace
----

=== Using CLI flag
Alternatively, you can specify a custom namespace using the `--namespace-config` flag:
[,console]
----
eksctl create addon --cluster my-cluster --name aws-ebs-csi-driver --namespace-config 'namespace=custom-namespace'
----

The `get` command will also retrieve the namespace value for the addon:
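
For example, a command of this shape (cluster and addon names taken from the examples above) returns the details shown below:

[,console]
----
eksctl get addon --cluster my-cluster --name aws-ebs-csi-driver --output yaml
----
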
[,yaml]
----
- ConfigurationValues: ""
IAMRole: ""
Issues: null
Name: aws-ebs-csi-driver
NamespaceConfig:
namespace: custom-namespace
NewerVersion: ""
PodIdentityAssociations: null
Status: ACTIVE
Version: v1.47.0-eksbuild.1
----

[[update-addons,update-addons.title]]
== Updating addons

@@ -231,6 +277,9 @@ eksctl update addon -f config.yaml
eksctl update addon --name vpc-cni --version 1.8.0 --service-account-role-arn <new-role>
----

[NOTE]
The namespace configuration cannot be updated once an addon is created. The `--namespace-config` flag is only available during addon creation.

Similarly to addon creation, when updating an addon, you have full control over the config changes that you may have previously applied to that addon's `configMap`. Specifically, you can preserve or overwrite them. This optional functionality is available via the same config file field, `resolveConflicts`, e.g.:

[,yaml]
90 changes: 60 additions & 30 deletions docs/clusters/eksctl-karpenter.adoc
@@ -4,13 +4,11 @@
= Karpenter Support
:info_doctype: section

`eksctl` supports adding https://karpenter.sh/[Karpenter] to a newly created cluster. It will create all the necessary
`eksctl` provides support for adding https://karpenter.sh/[Karpenter] to a newly created cluster. It will create all the necessary
prerequisites outlined in Karpenter's https://karpenter.sh/docs/getting-started/[Getting Started] section including installing
Karpenter itself using Helm. We currently support installing versions starting `0.20.0` and above.
Karpenter itself using Helm. We currently support installing versions `0.28.0+`. See the https://karpenter.sh/docs/upgrading/compatibility/[Karpenter compatibility] section for further details.

Use the `eksctl` cluster config field `karpenter` to install and configure it.

The following yaml outlines a typical installation configuration:
The following cluster configuration outlines a typical Karpenter installation:

[,yaml]
----
@@ -20,14 +18,14 @@ kind: ClusterConfig
metadata:
name: cluster-with-karpenter
region: us-west-2
version: '1.24'
version: '1.32' # requires a version of Kubernetes compatible with Karpenter
tags:
karpenter.sh/discovery: cluster-with-karpenter # here, it is set to the cluster name
iam:
withOIDC: true # required

karpenter:
version: 'v0.20.0' # Exact version must be specified
version: '1.2.1' # Exact version should be specified according to the Karpenter compatibility matrix

managedNodeGroups:
- name: managed-ng-1
@@ -42,45 +40,77 @@ to be set:
[,yaml]
----
karpenter:
version: 'v0.20.0'
version: '1.2.1'
createServiceAccount: true # default is false
defaultInstanceProfile: 'KarpenterNodeInstanceProfile' # default is to use the IAM instance profile created by eksctl
withSpotInterruptionQueue: true # adds all required policies and rules for supporting Spot Interruption Queue, default is false
----

OIDC must be defined in order to install Karpenter.
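
If your cluster was created without an OIDC provider, one way to associate one after the fact is via eksctl's IAM utilities (the cluster name here is illustrative):

[,console]
----
eksctl utils associate-iam-oidc-provider --cluster cluster-with-karpenter --approve
----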

Once Karpenter is successfully installed, add a https://karpenter.sh/docs/concepts/provisioners/[Provisioner] so Karpenter
can start adding the right nodes to the cluster.
Once Karpenter is successfully installed, add https://karpenter.sh/docs/concepts/nodepools/[NodePool(s)] and https://karpenter.sh/docs/concepts/nodeclasses/[NodeClass(es)] to allow Karpenter
to start adding nodes to the cluster.

The NodePool's `nodeClassRef` section must match the name of an `EC2NodeClass`. For example:

The provisioner's `instanceProfile` section must match the created `NodeInstanceProfile` role's name. For example:
[,yaml]
----
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
name: example
annotations:
kubernetes.io/description: "Example NodePool"
spec:
template:
spec:
requirements:
- key: kubernetes.io/arch
operator: In
values: ["amd64"]
- key: kubernetes.io/os
operator: In
values: ["linux"]
- key: karpenter.sh/capacity-type
operator: In
values: ["on-demand"]
- key: karpenter.k8s.aws/instance-category
operator: In
values: ["c", "m", "r"]
- key: karpenter.k8s.aws/instance-generation
operator: Gt
values: ["2"]
nodeClassRef:
group: karpenter.k8s.aws
kind: EC2NodeClass
name: example # must match the name of an EC2NodeClass
----

[,yaml]
----
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
name: default
name: example
annotations:
kubernetes.io/description: "Example EC2NodeClass"
spec:
requirements:
- key: karpenter.sh/capacity-type
operator: In
values: ["on-demand"]
limits:
resources:
cpu: 1000
provider:
instanceProfile: eksctl-KarpenterNodeInstanceProfile-${CLUSTER_NAME}
subnetSelector:
karpenter.sh/discovery: cluster-with-karpenter # must match the tag set in the config file
securityGroupSelector:
karpenter.sh/discovery: cluster-with-karpenter # must match the tag set in the config file
ttlSecondsAfterEmpty: 30
role: "eksctl-KarpenterNodeRole-${CLUSTER_NAME}" # replace with your cluster name
subnetSelectorTerms:
- tags:
karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
securityGroupSelectorTerms:
- tags:
karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
amiSelectorTerms:
- alias: al2023@latest # Amazon Linux 2023
----
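
Once the cluster is up, these resources can be applied in the usual way (assuming the manifests above are saved to `karpenter-resources.yaml`):

[,console]
----
kubectl apply -f karpenter-resources.yaml
----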

Note that unless `defaultInstanceProfile` is defined, the name used for `instanceProfile` is
`eksctl-KarpenterNodeInstanceProfile-<cluster-name>`.
Note that you must specify one of `role` or `instanceProfile` to launch nodes. If you choose to use `instanceProfile`,
the name of the profile created by `eksctl` follows the pattern `eksctl-KarpenterNodeInstanceProfile-<cluster-name>`.
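
As a quick sanity check, the default profile name can be reproduced with plain shell substitution (the cluster name here is illustrative):

```shell
# Derive the default Karpenter node instance profile name created by eksctl
CLUSTER_NAME=cluster-with-karpenter
echo "eksctl-KarpenterNodeInstanceProfile-${CLUSTER_NAME}"
# → eksctl-KarpenterNodeInstanceProfile-cluster-with-karpenter
```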

## Automatic Security Group Tagging

`eksctl` automatically tags the cluster's shared node security group with `karpenter.sh/discovery` when both Karpenter is enabled (`karpenter.version` specified) and the `karpenter.sh/discovery` tag exists in `metadata.tags`. This enables AWS Load Balancer Controller compatibility.
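
One way to verify the tag was applied (assuming the AWS CLI is configured for the cluster's account and region, and using the cluster name from the example above):

[,console]
----
aws ec2 describe-security-groups \
  --filters "Name=tag:karpenter.sh/discovery,Values=cluster-with-karpenter" \
  --query "SecurityGroups[].GroupId"
----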

Note that with Karpenter 0.32.0+, Provisioners have been deprecated and replaced by https://karpenter.sh/docs/concepts/nodepools/[NodePools].
4 changes: 2 additions & 2 deletions docs/iam/iam-policies.adoc
@@ -92,12 +92,12 @@ nodeGroups:
attachPolicyARNs:
- arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
- arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
- arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPullOnly
- arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
- arn:aws:iam::1111111111:policy/kube2iam
withAddonPolicies:
autoScaler: true
imageBuilder: true
----

WARNING: If a nodegroup includes the `attachPolicyARNs` it *must* also include the default node policies, like `AmazonEKSWorkerNodePolicy`, `AmazonEKS_CNI_Policy` and `AmazonEC2ContainerRegistryReadOnly` in this example.
WARNING: If a nodegroup includes `attachPolicyARNs`, it **must** also include the default node policies, like `AmazonEKSWorkerNodePolicy`, `AmazonEKS_CNI_Policy` and `AmazonEC2ContainerRegistryPullOnly` in this example.
68 changes: 8 additions & 60 deletions docs/iam/minimum-iam-policies.adoc
@@ -2,6 +2,8 @@
[#minimum-iam-policies]
= Minimum IAM policies

include::../attributes.txt[]

This document describes the minimum IAM policies needed to run the main use cases of eksctl. These are the ones used to
run the integration tests.

@@ -14,73 +16,18 @@ An AWS Managed Policy is created and administered by AWS. You cannot change the

*AmazonEC2FullAccess (AWS Managed Policy)*

----
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "ec2:*",
"Effect": "Allow",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "elasticloadbalancing:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "cloudwatch:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "autoscaling:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:CreateServiceLinkedRole",
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:AWSServiceName": [
"autoscaling.amazonaws.com",
"ec2scheduled.amazonaws.com",
"elasticloadbalancing.amazonaws.com",
"spot.amazonaws.com",
"spotfleet.amazonaws.com",
"transitgateway.amazonaws.com"
]
}
}
}
]
}
----
link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEC2FullAccess.html[View AmazonEC2FullAccess policy definition.]

*AWSCloudFormationFullAccess (AWS Managed Policy)*

----
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudformation:*"
],
"Resource": "*"
}
]
}
----
link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudFormationFullAccess.html[View AWSCloudFormationFullAccess policy definition.]

*EksAllAccess*

[source,json,subs="verbatim,attributes"]
----
{
"Version": "2012-10-17",
"Version": "2012-10-17",{tcx5-waiver}
"Statement": [
{
"Effect": "Allow",
@@ -119,9 +66,10 @@ An AWS Managed Policy is created and administered by AWS. You cannot change the

*IamLimitedAccess*

[source,json,subs="verbatim,attributes"]
----
{
"Version": "2012-10-17",
"Version": "2012-10-17",{tcx5-waiver}
"Statement": [
{
"Effect": "Allow",