This repository has been archived by the owner on Jan 11, 2023. It is now read-only.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contribution. Note that acs-engine is deprecated--see https://github.com/Azure/aks-engine instead.
The scheduler falls back to a default of 16 as the maximum number of PDs allowed per agent.
See:
https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go#L39
and
https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go#L208
This means clusters running with bigger VMs (e.g. Standard_DS14_v2 accepts 32 data disks) will fail to schedule more than 16 pods that have PDs.
Note:
The scheduler currently applies this filter uniformly across all agents and is not designed for non-uniform/mixed node/agent types. During cluster provisioning we can pick either MIN(allowed data disks per agent), which wastes capacity on the larger VMs, or MAX(allowed data disks per agent), which causes intermittent scheduling errors on the smaller VMs. We will have to accept one of these trade-offs until k8s takes the node type (from the Cloud Provider) into account in scheduling decisions.
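The MIN/MAX trade-off above can be sketched as follows. The disk limits are illustrative: the 32-disk limit for Standard_DS14_v2 comes from this issue, while the 8-disk figure for Standard_DS2_v2 is an assumed value for a smaller size in a mixed pool.

```go
package main

import "fmt"

// Illustrative max data disks per VM size in a mixed agent pool.
// The DS14_v2 limit is from the issue; the DS2_v2 limit is assumed here.
var diskLimits = map[string]int{
	"Standard_DS2_v2":  8,
	"Standard_DS14_v2": 32,
}

// clusterPDLimit picks a single cluster-wide PD cap from per-size limits.
// useMin=true is the safe choice (no VM is oversubscribed, but big VMs
// lose capacity); useMin=false uses the largest limit and risks attach
// failures on the smaller VMs.
func clusterPDLimit(limits map[string]int, useMin bool) int {
	first := true
	var result int
	for _, l := range limits {
		if first || (useMin && l < result) || (!useMin && l > result) {
			result = l
			first = false
		}
	}
	return result
}

func main() {
	fmt.Println(clusterPDLimit(diskLimits, true))  // MIN: capacity loss on DS14_v2
	fmt.Println(clusterPDLimit(diskLimits, false)) // MAX: errors on DS2_v2
}
```

Either way the cap is a single number for the whole cluster, which is precisely why neither option is fully satisfactory for heterogeneous pools.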