Closed as not planned
Labels: area/cluster-autoscaler, area/provider/aws
Description
I have set up a Kubernetes cluster using EKS. Cluster Autoscaler (CA) has been configured to increase/decrease the number of nodes based on resource availability for pods. During scale-down, the CA terminates a node before the pods running on it have been moved to another node. As a result, the pods only get scheduled on another node after the node has been terminated, and there is some downtime until the rescheduled pods become healthy on the new node.
How can I avoid this downtime by ensuring that the pods are scheduled on another node before the node is terminated? (A sketch of the kind of mitigation I have in mind follows the deployment below.)
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        - image: k8s.gcr.io/cluster-autoscaler:v1.12.3
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/production
            - --balance-similar-node-groups=true
          env:
            - name: AWS_REGION
              value: eu-central-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/kubernetes/pki/ca.crt"
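For reference, this is the kind of mitigation I am looking at: a PodDisruptionBudget for the workload, so that the autoscaler's node drain cannot evict all replicas at once and at least one pod stays available while replacements start on another node. The workload name and label (my-app) below are placeholders, not part of my actual setup, and this is only a minimal sketch:

apiVersion: policy/v1beta1   # policy/v1 on Kubernetes 1.21+
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb           # placeholder name
  namespace: default
spec:
  minAvailable: 1            # keep at least one replica running during a node drain
  selector:
    matchLabels:
      app: my-app            # placeholder; must match the workload's pod labels

Is something like this (together with multiple replicas for the workload) the expected way to avoid the downtime, or is there a CA-side setting for this?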