# Large deployments of K8s
For large scale deployments, consider the following configuration changes (illustrative examples of these overrides follow the list):
* Tune Ansible settings, for example the `timeout` var, to fit large numbers of nodes being deployed.
* Override the `foo_image_repo` vars to point to an intranet registry.
* Set `download_localhost: true`. See download modes for details.
* Adjust the `retry_stagger` global var as appropriate. It should provide a sane load on the delegate (the first K8s master node) when retrying failed push or download operations.
* Tune parameters for DNS-related applications (dnsmasq daemon set, kubedns replication controller), such as `dns_memory_requests` and the matching `dns_memory_limit`. Please note that limits must always be greater than or equal to requests.
* Tune CPU/memory limits and requests. Those are located in the roles' defaults and named like `foo_cpu_requests`, together with the matching `_limit` vars. Note that 'Mi' memory units for K8s will be submitted as 'M' when applied to `docker run`, and K8s CPU units will end up with the 'm' stripped for docker as well. This is required because docker does not understand K8s units well.
* Tune `kubelet_status_update_frequency` to increase the reliability of kubelet, and `kube_controller_pod_eviction_timeout` for better Kubernetes reliability. Check out the Kubernetes Reliability doc for details.
* Tune network prefix sizes, namely `kube_network_node_prefix`, `kube_service_addresses` and `kube_pods_subnet`.
* Add calico-rr nodes if you are deploying with Calico or Canal. Nodes recover from host/network interruption much quicker with calico-rr. Note that the calico-rr role must be on a host without the kube-master or kube-node role (but the etcd role is okay); see the inventory sketch after this list.
* Check out the Inventory section of the Getting started guide for tips on creating a large-scale Ansible inventory.
* Set `etcd_events_cluster_setup: true` to store events in a separate dedicated etcd instance.
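
As a rough sketch of the download- and registry-related items above, the overrides below could go into your inventory group vars (for example `group_vars/k8s-cluster.yml`); the registry URL is a placeholder and the values are illustrative rather than recommended defaults:

```yaml
# Illustrative values only -- adjust them to your environment.

# Point image repos at an intranet registry instead of the public one;
# there is one foo_image_repo var per component, etcd is just an example.
etcd_image_repo: "registry.intra.example.com/coreos/etcd"

# Delegate downloads to localhost (see download modes for details).
download_localhost: true

# Stagger retries of failed push/download operations so the delegate
# (the first master) sees a sane load.
retry_stagger: 30
```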
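The DNS, reliability, and sizing knobs can be sketched the same way; the variable names come from the roles' defaults, while the values below are placeholders, not tuned recommendations:

```yaml
# DNS add-on resources; limits must always be >= requests.
dns_memory_requests: 70Mi
dns_memory_limit: 170Mi

# Kubelet / controller-manager reliability tuning (see Kubernetes Reliability).
kubelet_status_update_frequency: 10s
kube_controller_pod_eviction_timeout: 5m0s

# Network sizing: make sure the service/pod subnets and the per-node prefix
# leave room for the planned number of nodes and pods per node.
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_network_node_prefix: 24

# Store cluster events in a separate dedicated etcd instance.
etcd_events_cluster_setup: true
```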
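For the calico-rr placement rule, a hypothetical Ansible inventory (YAML form, with made-up host names) could keep the route reflectors off the kube-master and kube-node hosts while sharing a host with etcd:

```yaml
all:
  children:
    kube-master:
      hosts:
        master1:
    kube-node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        master1:
        rr1:          # etcd may share a host with calico-rr
    calico-rr:
      hosts:
        rr1:          # no kube-master or kube-node role on these hosts
        rr2:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
```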
For example, when deploying 200 nodes, you may want to run ansible with `--timeout=600` and define the `retry_stagger` var accordingly.
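
A hypothetical invocation could look like the following; the inventory path, playbook name, and values are placeholders to adapt to your setup:

```shell
ansible-playbook -i inventory/hosts.ini cluster.yml -b \
  --timeout=600 \
  -e "retry_stagger=60"
```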