RFC: Add a new option --scheduler-name for ignoring pods which should be h… #170
Conversation
xref #164
pkg/batchd/cache/cache.go
Outdated
@@ -88,6 +89,9 @@ func newSchedulerCache(config *rest.Config) *SchedulerCache {
FilterFunc: func(obj interface{}) bool {
    switch t := obj.(type) {
    case *v1.Pod:
        if strings.Compare(obj.(*v1.Pod).Spec.SchedulerName, schedulerName) != 0 {
Filtering only the pods with a matching scheduler name may not be enough; it can cause kube-batchd to schedule pods onto an already overloaded node. For example, if pods from another scheduler are running on a node, that node still looks idle to kube-batchd because it never receives info about those running pods. When kube-batchd then tries to schedule pods onto that node, it may fail due to lack of resources.
+1
We should watch 'pending & schedulerName' or 'running' pods.
pkg/batchd/cache/cache.go
Outdated
@@ -88,6 +89,10 @@ func newSchedulerCache(config *rest.Config) *SchedulerCache {
FilterFunc: func(obj interface{}) bool {
    switch t := obj.(type) {
    case *v1.Pod:
        pod := obj.(*v1.Pod)
        if strings.Compare(pod.Spec.SchedulerName, schedulerName) == 0 && pod.Status.Phase == v1.PodPending {
This also adds pending pods from other schedulers (nonTerminatedPod() will return true for them).
ah I see, will fix it soon, thanks
@jinzhejz updated the condition, could you take a look?
@mitake there is a build error, could you take a look?
Everything else looks good to me.
@jinzhejz oops... really sorry for that, I'll fix it soon.
…andled by the default scheduler. This commit adds a new option --scheduler-name to kube-batchd. kube-batchd handles pods which have the name specified with the option in their v1.Pod.Spec.SchedulerName. The motivation is to separate pods which should be controlled by kube-batchd from other pods which can be handled by the default scheduler.
@jinzhejz fixed the error, could you take a look?
lgtm
thanks, team; merged :)
I'm still not fully sure this change fits kube-arbitrator's design. Comments would be really appreciated.