Description
AWS EKS cluster version: 1.27
Karpenter version: 0.28.1

Pending pods are not being scheduled: Karpenter rejects every provisioner with "incompatible requirements", reporting that the labels "project" and "node-role.kubernetes.io/control-plane" required by the pods do not have known values on any provisioner.

Karpenter pod logs:
2025-06-12T06:58:42.083Z ERROR controller.provisioner Could not schedule pod, incompatible with provisioner "karpenter-dev-provisioner-helper", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-primary", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-default", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-elappnextgen", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-on-demand", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-tiny", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-upgrade-test", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-core-services", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "node-role.kubernetes.io/control-plane" does not have known values {"commit": "30fa8f3-dirty", "pod": "kube-system/csi-oci-controller-7b9965bf5b-tzzpx"}
2025-06-12T06:58:42.083Z ERROR controller.provisioner Could not schedule pod, incompatible with provisioner "karpenter-dev-provisioner-helper", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-primary", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [upgrade-test] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-default", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-elappnextgen", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [upgrade-test] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-on-demand", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-tiny", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-upgrade-test", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-core-services", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values {"commit": "30fa8f3-dirty", "pod": "upgrade-eks-123/upgrade-test-deployment-7858cbd888-r8zpg"}
2025-06-12T06:58:42.083Z ERROR controller.provisioner Could not schedule pod, incompatible with provisioner "karpenter-dev-provisioner-helper", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-primary", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [upgrade-test] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-default", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-elappnextgen", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [upgrade-test] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-on-demand", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-tiny", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-upgrade-test", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-core-services", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values {"commit": "30fa8f3-dirty", "pod": "upgrade-eks-123/upgrade-test-deployment-7858cbd888-9c7cq"}
2025-06-12T06:58:42.083Z ERROR controller.provisioner Could not schedule pod, incompatible with provisioner "karpenter-dev-provisioner-helper", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-primary", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [asterisk-preprod] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-default", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-elappnextgen", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [asterisk-preprod] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-on-demand", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-tiny", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-upgrade-test", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-core-services", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values {"commit": "30fa8f3-dirty", "pod": "preprod-asterisk/asterisk-0"}
2025-06-12T06:58:42.083Z ERROR controller.provisioner Could not schedule pod, incompatible with provisioner "karpenter-dev-provisioner-helper", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-primary", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [upgrade-test] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-default", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-elappnextgen", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, key project, project In [upgrade-test] not in project In [elappnextgen]; incompatible with provisioner "karpenter-dev-provisioner-on-demand", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-tiny", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-upgrade-test", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values; incompatible with provisioner "karpenter-dev-provisioner-core-services", daemonset overhead={"cpu":"736m","memory":"1452Mi","pods":"11"}, incompatible requirements, label "project" does not have known values {"commit": "30fa8f3-dirty", "pod": "upgrade-eks-123/busybox-deployment-745bdfc699-vwzzx"}
2025-06-12T06:58:52.074Z DEBUG controller.provisioner 80 out of 234 instance types were excluded because they would breach provisioner limits {"commit": "30fa8f3-dirty", "provisioner": "karpenter-dev-provisioner-tiny"}
2025-06-12T06:58:52.075Z DEBUG controller.provisioner 104 out of 234 instance types were excluded because they would breach provisioner limits {"commit": "30fa8f3-dirty", "provisioner": "karpenter-dev-provisioner-upgrade-test"}
2025-06-12T06:58:52.075Z DEBUG controller.provisioner 2 out of 234 instance types were excluded because they would breach provisioner limits {"commit": "30fa8f3-dirty", "provisioner": "karpenter-dev-provisioner-core-services"}
2025-06-12T06:58:52.079Z DEBUG controller.provisioner 80 out of 234 instance types were excluded because they would breach provisioner limits {"commit": "30fa8f3-dirty", "provisioner": "karpenter-dev-provisioner-tiny"}
2025-06-12T06:58:52.079Z DEBUG controller.provisioner 104 out of 234 instance types were excluded because they would breach provisioner limits {"commit": "30fa8f3-dirty", "provisioner": "karpenter-dev-provisioner-upgrade-test"}
2025-06-12T06:58:52.079Z DEBUG controller.provisioner 2 out of 234 instance types were excluded because they would breach provisioner limits {"commit": "30fa8f3-dirty", "provisioner": "karpenter-dev-provisioner-core-services"}
2025-06-12T06:58:52.081Z DEBUG controller.provisioner 80 out of 234 instance types were excluded because they would breach provisioner limits {"commit": "30fa8f3-dirty", "provisioner": "karpenter-dev-provisioner-tiny"}
2025-06-1
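For context on the errors above: Karpenter v0.28 reports `label "X" does not have known values` when a pending pod's nodeSelector or node affinity references a label that none of a provisioner's `labels` or `requirements` define, so the scheduler cannot derive a value for it. The workload manifests are not included in this issue, but the `project In [upgrade-test]` messages imply a pod roughly like the sketch below (the selector, pod name, and image are assumptions inferred from the log, not the actual manifests):

```yaml
# Hypothetical pod from the upgrade-eks-123 namespace, reconstructed from the log
# message "project In [upgrade-test] not in project In [elappnextgen]".
apiVersion: v1
kind: Pod
metadata:
  name: upgrade-test-example          # assumed name, for illustration only
  namespace: upgrade-eks-123
spec:
  nodeSelector:
    project: upgrade-test             # no provisioner advertises project=upgrade-test,
                                      # so the label has no known value anywhere
  containers:
    - name: app
      image: busybox:1.36             # assumed image
      command: ["sleep", "3600"]
```

The csi-oci-controller error is the same mechanism with a different label: that pod requires `node-role.kubernetes.io/control-plane`, a control-plane-only label that none of the provisioners (reasonably) advertise.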
Provisioner setup:
#OLD PROVISIONERS
requirements:
  - key: karpenter.sh/capacity-type
    operator: In
    values:
      - spot
  - key: kubernetes.io/arch
    operator: In
    values:
      - amd64
  # - key: karpenter.k8s.aws/instance-cpu
  #   operator: Lt
  #   values:
  #     - "33"
  # - key: node.kubernetes.io/instance-type
  #   operator: In
  #   values: ["m5.2xlarge", "t3.2xlarge"]
  - key: kubernetes.io/os
    operator: In
    values:
      - linux
  - key: karpenter.k8s.aws/instance-family
    operator: NotIn
    values:
      - a1
      - ci
      - c3
      - c4
      - inf1
  - key: "topology.kubernetes.io/zone"
    operator: In
    values:
      - "me-south-1a"
limits:
  resources:
    cpu: 2k
    memory: 3200Gi
labels:
  kubernetes.io/os: linux
  node: karpenter
consolidation:
  enabled: true
kubeletConfiguration:
  maxPods: 110
provider:
  launchTemplate: karpenter-default-v2-provisioner
  subnetSelector:
    cluster_name: "xxx-dev-cluster"
weight: 1
secondProvisioner:
  name: elappnextgen
  launchTemplate: karpenter-elappnextgen-provisioner
  labels:
    project: elappnextgen
##New provisioners
new_provisioners:
  #Values for default provisioner
  first:
    name: primary
    providerRef:
      name: template-126
    requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values:
          - spot
      - key: kubernetes.io/arch
        operator: In
        values:
          - amd64
      - key: karpenter.k8s.aws/instance-family
        operator: NotIn
        values:
          - a1
          - ci
          - c3
          - c4
          - inf1
      - key: karpenter.k8s.aws/instance-cpu
        operator: Gt
        values: ["4"]
      - key: karpenter.k8s.aws/instance-memory
        operator: Gt
        values: ["16000"]
      - key: topology.kubernetes.io/zone
        operator: In
        values:
          - me-south-1a
          - me-south-1c
      - key: kubernetes.io/os
        operator: In
        values:
          - linux
    limits:
      resources:
        cpu: 2k
        memory: 3200Gi
    labels:
      kubernetes.io/os: linux
      node: karpenter
      project: elappnextgen
    consolidation:
      enabled: true
    kubeletConfiguration:
      systemReserved:
        cpu: 300m
        memory: 300Mi
        ephemeral-storage: 4Gi
      kubeReserved:
        cpu: 300m
        memory: 300Mi
      evictionHard:
        memory.available: 3%
        nodefs.available: 10%
        nodefs.inodesFree: 10%
      evictionSoft:
        memory.available: 300Mi
        nodefs.available: 15%
        nodefs.inodesFree: 15%
      evictionSoftGracePeriod:
        memory.available: 2m0s
        nodefs.available: 2m0s
        nodefs.inodesFree: 2m0s
      evictionMaxPodGracePeriod: 60
      imageGCHighThresholdPercent: 75
      imageGCLowThresholdPercent: 70
      cpuCFSQuota: true
      maxPods: 100
    weight: 100
  #Values for CORE-services provisioner
  second:
    name: core-services
    providerRef:
      name: template-126
    taints:
      - key: component=infracore
        effect: NoSchedule
    requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values:
          - spot
      - key: kubernetes.io/arch
        operator: In
        values:
          - amd64
      - key: kubernetes.io/os
        operator: In
        values:
          - linux
      - key: karpenter.k8s.aws/instance-cpu
        operator: Gt
        values: ["6"]
      - key: karpenter.k8s.aws/instance-network-bandwidth
        operator: Lt
        values: ["2000"]
      - key: karpenter.k8s.aws/instance-family
        operator: NotIn
        values:
          - a1
          - ci
          - c3
          - c4
          - inf1
      - key: topology.kubernetes.io/zone
        operator: In
        values:
          - me-south-1a
    limits:
      resources:
        cpu: 2k
        memory: 1600Gi
    labels:
      kubernetes.io/os: linux
      node: karpenter
    consolidation:
      enabled: false
    ttlSecondsAfterEmpty: 3600
    kubeletConfiguration:
      systemReserved:
        cpu: 300m
        memory: 300Mi
        ephemeral-storage: 4Gi
      kubeReserved:
        cpu: 300m
        memory: 300Mi
      evictionHard:
        memory.available: 3%
        nodefs.available: 10%
        nodefs.inodesFree: 10%
      evictionSoft:
        memory.available: 400Mi
        nodefs.available: 15%
        nodefs.inodesFree: 15%
      evictionSoftGracePeriod:
        memory.available: 5m0s
        nodefs.available: 5m0s
        nodefs.inodesFree: 5m0s
      evictionMaxPodGracePeriod: 60
      imageGCHighThresholdPercent: 85
      imageGCLowThresholdPercent: 80
      cpuCFSQuota: true
      maxPods: 20
  #Values for tiny provisioner
  third:
    name: tiny
    providerRef:
      name: tiny
    taints:
      - key: project=tiny
        effect: NoSchedule
    requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values:
          - spot
      - key: kubernetes.io/arch
        operator: In
        values:
          - amd64
      - key: karpenter.k8s.aws/instance-cpu
        operator: Lt
        values: ["4"]
      - key: karpenter.k8s.aws/instance-family
        operator: NotIn
        values:
          - a1
          - ci
          - c3
          - c4
          - inf1
      - key: topology.kubernetes.io/zone
        operator: In
        values:
          - me-south-1a
    limits:
      resources:
        cpu: '100'
        memory: 160Gi
    labels:
      kubernetes.io/os: linux
      node: karpenter
    consolidation:
      enabled: true
    ttlSecondsAfterEmpty: 3600
    kubeletConfiguration:
      systemReserved:
        cpu: 100m
        memory: 100Mi
        ephemeral-storage: 2Gi
      kubeReserved:
        cpu: 200m
        memory: 200Mi
        ephemeral-storage: 4Gi
      evictionHard:
        memory.available: 3%
        nodefs.available: 10%
        nodefs.inodesFree: 10%
      evictionSoft:
        memory.available: 300Mi
        nodefs.available: 15%
        nodefs.inodesFree: 15%
      evictionSoftGracePeriod:
        memory.available: 2m0s
        nodefs.available: 2m0s
        nodefs.inodesFree: 2m0s
      evictionMaxPodGracePeriod: 60
      imageGCHighThresholdPercent: 85
      imageGCLowThresholdPercent: 80
      cpuCFSQuota: true
      podsPerCore: 2
      maxPods: 20
  #Values for on-demand provisioner
  forth:
    name: on-demand
    providerRef:
      name: template-126
    requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values:
          - on-demand
      - key: kubernetes.io/arch
        operator: In
        values:
          - amd64
      - key: karpenter.k8s.aws/instance-cpu
        operator: Gt
        values: ["4"]
      - key: karpenter.k8s.aws/instance-family
        operator: NotIn
        values:
          - a1
          - ci
          - c3
          - c4
          - inf1
      - key: topology.kubernetes.io/zone
        operator: In
        values:
          - me-south-1a
      - key: kubernetes.io/os
        operator: In
        values:
          - linux
    limits:
      resources:
        cpu: 2k
        memory: 3200Gi
    labels:
      kubernetes.io/os: linux
      node: karpenter
    consolidation:
      enabled: true
    ttlSecondsAfterEmpty: 3600
    kubeletConfiguration:
      systemReserved:
        cpu: 100m
        memory: 100Mi
        ephemeral-storage: 2Gi
      kubeReserved:
        cpu: 200m
        memory: 200Mi
        ephemeral-storage: 4Gi
      evictionHard:
        memory.available: 3%
        nodefs.available: 10%
        nodefs.inodesFree: 10%
      evictionSoft:
        memory.available: 300Mi
        nodefs.available: 15%
        nodefs.inodesFree: 15%
      evictionSoftGracePeriod:
        memory.available: 2m0s
        nodefs.available: 2m0s
        nodefs.inodesFree: 2m0s
      evictionMaxPodGracePeriod: 60
      imageGCHighThresholdPercent: 85
      imageGCLowThresholdPercent: 80
      cpuCFSQuota: true
      podsPerCore: 2
      maxPods: 20
  fifth:
    name: upgrade-test
    providerRef:
      name: template-126
    requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values:
          - spot
      - key: kubernetes.io/arch
        operator: In
        values:
          - amd64
      - key: karpenter.k8s.aws/instance-cpu
        operator: Gt
        values: ["4"]
      - key: karpenter.k8s.aws/instance-family
        operator: NotIn
        values:
          - a1
          - ci
          - c3
          - c4
          - inf1
      - key: topology.kubernetes.io/zone
        operator: In
        values:
          - me-south-1a
    limits:
      resources:
        cpu: '100'
        memory: 160Gi
    labels:
      kubernetes.io/os: linux
      node: karpenter
    consolidation:
      enabled: true
    ttlSecondsAfterEmpty: 3600
    kubeletConfiguration:
      systemReserved:
        cpu: 100m
        memory: 100Mi
        ephemeral-storage: 2Gi
      kubeReserved:
        cpu: 200m
        memory: 200Mi
        ephemeral-storage: 4Gi
      evictionHard:
        memory.available: 3%
        nodefs.available: 10%
        nodefs.inodesFree: 10%
      evictionSoft:
        memory.available: 300Mi
        nodefs.available: 15%
        nodefs.inodesFree: 15%
      evictionSoftGracePeriod:
        memory.available: 2m0s
        nodefs.available: 2m0s
        nodefs.inodesFree: 2m0s
      evictionMaxPodGracePeriod: 60
      imageGCHighThresholdPercent: 85
      imageGCLowThresholdPercent: 80
      cpuCFSQuota: true
      podsPerCore: 2
      maxPods: 20
  sixth:
    name: helper
    providerRef:
      name: template-127
    requirements:
      - key: karpenter.sh/capacity-type
        operator: In
        values:
          - on-demand
      - key: kubernetes.io/arch
        operator: In
        values:
          - amd64
      - key: karpenter.k8s.aws/instance-cpu
        operator: Gt
        values: ["8"]
      - key: karpenter.k8s.aws/instance-family
        operator: NotIn
        values:
          - a1
          - ci
          - c3
          - c4
          - inf1
      - key: topology.kubernetes.io/zone
        operator: In
        values:
          - me-south-1a
          - me-south-1b
          - me-south-1c
      - key: kubernetes.io/os
        operator: In
        values:
          - linux
    limits:
      resources:
        cpu: 2k
        memory: 3200Gi
    labels:
      kubernetes.io/os: linux
      node: karpenter
    consolidation:
      enabled: true
    kubeletConfiguration:
      systemReserved:
        cpu: 300m
        memory: 300Mi
        ephemeral-storage: 4Gi
      kubeReserved:
        cpu: 300m
        memory: 300Mi
      evictionHard:
        memory.available: 3%
        nodefs.available: 10%
        nodefs.inodesFree: 10%
      evictionSoft:
        memory.available: 300Mi
        nodefs.available: 15%
        nodefs.inodesFree: 15%
      evictionSoftGracePeriod:
        memory.available: 2m0s
        nodefs.available: 2m0s
        nodefs.inodesFree: 2m0s
      evictionMaxPodGracePeriod: 60
      imageGCHighThresholdPercent: 75
      imageGCLowThresholdPercent: 70
      cpuCFSQuota: true
      maxPods: 100
    weight: 100
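One thing worth noting about the values above (a hedged observation, not a confirmed root cause): only the new primary provisioner and the old elappnextgen provisioner define a `project` label, and its value is `elappnextgen`, which matches the scheduler's complaint that `project` has no value matching `upgrade-test` or `asterisk-preprod`. If the upgrade-test pods are meant to land on the upgrade-test provisioner, it would presumably need something like the following added to its values (the exact key path assumes the chart copies `labels` straight onto the Provisioner spec):

```yaml
# Sketch only: give the upgrade-test provisioner a known value for the "project"
# label that the pods in upgrade-eks-123 appear to select on.
new_provisioners:
  fifth:
    name: upgrade-test
    labels:
      kubernetes.io/os: linux
      node: karpenter
      project: upgrade-test   # assumed desired value, taken from the pod-side requirement in the logs
```

A requirement (`key: project`, `operator: In`, `values: ["upgrade-test"]`) would work as well, since Karpenter treats both `labels` and `requirements` as sources of known label values; the asterisk-preprod pods would need the same treatment on whichever provisioner is supposed to host them.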
values-dev.yaml:

configLogging:
  loglevelWebhook: error
  zapLoggerConfig: |
    {
      "level": "debug",
      "development": false,
      "disableStacktrace": true,
      "disableCaller": true,
      "sampling": {
        "initial": 100,
        "thereafter": 100
      },
      "outputPaths": ["stdout"],
      "errorOutputPaths": ["stderr"],
      "encoding": "console",
      "encoderConfig": {
        "timeKey": "time",
        "levelKey": "level",
        "nameKey": "logger",
        "callerKey": "caller",
        "messageKey": "message",
        "stacktraceKey": "stacktrace",
        "levelEncoder": "capital",
        "timeEncoder": "iso8601"
      }
    }
karpenter:
  controller:
    env:
      CLUSTER_NAME: xxx-dev-cluster
      healthProbePort: "8081"
      karpenterService: karpenter
      kubernetesMinVersion: 1.19.0-0
      metricsPort: "8000"
      webhookPort: "8443"
    image:
      repository: public.ecr.aws/karpenter/controller
      tag: v0.28.1
    imagePullPolicy: IfNotPresent
    nodeSelector:
      kubernetes.io/os: linux
    ports:
      - name: http-metrics
        port: 8000
        protocol: TCP
        targetPort: http-metrics
      - name: https-webhook
        port: 8443
        protocol: TCP
        targetPort: https-webhook
    replicas: 1
    type: ClusterIP
######################################################################################################
karpenterGlobalSettings:
  awsClusterEndpoint: "https://D77A3C6C6E5D33B80E566312F8D6DF97.yl4.me-south-1.eks.amazonaws.com"
  awsClusterName: "xxx-dev-cluster"
  awsDefaultInstanceProfile: "KarpenterInstanceProfile-dev"
  awsAccountId: "118389142306"
  awsNodeGroup: "karpenter-01-spot"
  awsKarpenterControllerRole: "karpenter-controller-role"
#######################################################################################################
  awsEnableENILimitedPodDensity: "true"
  awsEnablePodENI: "false"
  awsInterruptionQueueName: ""
  awsIsolatedVPC: "false"
  awsNodeNameConvention: ip-name
  awsVmMemoryOverheadPercent: "0.075"
  batchIdleDuration: 1s
  batchMaxDuration: 10s
  featureGatesDriftEnabled: "false"
  kubernetesClusterDomain: cluster.local
The IAM role is fine and has all the required permissions.
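For completeness, here is a rough sketch of how the upgrade-test values are presumably rendered on the cluster as a `karpenter.sh/v1alpha5` Provisioner; the field names come from the v1alpha5 API, but the actual output depends on the wrapper chart, so treat this as an assumption rather than the chart's real template. It also shows where the missing `project` label would have to appear for the scheduler to consider it a known value:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: karpenter-dev-provisioner-upgrade-test   # name taken from the log output
spec:
  providerRef:
    name: template-126
  labels:
    kubernetes.io/os: linux
    node: karpenter
    # "project" is absent here, which is why the log reports
    # 'label "project" does not have known values' for this provisioner.
  requirements:                                  # abridged; the full list is in the values above
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
    - key: karpenter.k8s.aws/instance-cpu
      operator: Gt
      values: ["4"]
  limits:
    resources:
      cpu: "100"
      memory: 160Gi
```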