Fix filter plugins are not been called during preemption #81876
Conversation
/sig scheduling
Force-pushed from 5a62ce6 to 8cc86ce
cc/ @alculquicondor
@@ -1121,7 +1136,7 @@ func selectVictimsOnNode(
// inter-pod affinity to one or more victims, but we have decided not to
// support this case for performance reasons. Having affinity to lower
// priority pods is not a recommended configuration anyway.
if fits, _, err := podFitsOnNode(pod, meta, nodeInfoCopy, fitPredicates, queue, false); !fits {
if fits, _, err, _ := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {
I think the following is correct, since err is the last return value:
if fits, _, err, _ := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {
if fits, _, _, err := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {
done
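The multiple-assignment pitfall above can be reproduced in isolation. The stub below only mirrors the four-value return shape discussed in the thread, (fits, failureReasons, status, err); the types and the stub itself are placeholders, not the real scheduler signatures:

```go
package main

import (
	"errors"
	"fmt"
)

// podFitsOnNodeStub mimics a function returning
// (fits, failureReasons, status, err) with placeholder types.
func podFitsOnNodeStub() (bool, []string, string, error) {
	return false, nil, "some-status", errors.New("real error")
}

func main() {
	// Wrong: this binds the third return value (the status) to err
	// and silently discards the actual error.
	if fits, _, err, _ := podFitsOnNodeStub(); !fits {
		fmt.Println("wrong binding, err is actually the status:", err)
	}
	// Right: err is the last return value.
	if fits, _, _, err := podFitsOnNodeStub(); !fits {
		fmt.Println("right binding:", err)
	}
}
```

Because Go tuple assignment is purely positional, the compiler accepts the wrong ordering as long as the types happen to line up, which is why the review caught it by eye rather than by a build failure.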
pod *v1.Pod,
meta predicates.PredicateMetadata,
info *schedulernodeinfo.NodeInfo,
predicateFuncs map[string]predicates.FitPredicate,
queue internalqueue.SchedulingQueue,
alwaysCheckAllPredicates bool,
filterPluginsFunc func(pc *framework.PluginContext, pod *v1.Pod, nodeName string) *framework.Status,
) (bool, []predicates.PredicateFailureReason, error) {
why are you using the function as a parameter? what about the framework
Thanks @draveness
IMO, from now on, it is sufficient to pass filterPluginsFunc, which seems to be better from the perspective of minimum function implementation and testing.
On the other hand, this approach has the following drawbacks:
- Harder to read; IDEs can't find all the references to Framework.RunFilterPlugins
- We have repeated code for the function prototype. Refactoring takes longer.
If we have a proper fake Framework, testing shouldn't be a problem. Also, we should prefer to test exported interfaces.
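The fake-Framework idea mentioned here can be sketched in isolation. Everything below (the `Framework` interface shape, the simplified `RunFilterPlugins` signature, the `Status` type) is a hypothetical reduction for illustration, not the real scheduler framework API:

```go
package main

import "fmt"

// Status is a cut-down stand-in for the framework's status type.
type Status struct{ code int }

const (
	Success       = 0
	Unschedulable = 1
)

// IsSuccess treats a nil status as success, matching the convention
// discussed later in this thread.
func (s *Status) IsSuccess() bool { return s == nil || s.code == Success }

// Framework is a minimal interface exposing only the filter entry point.
type Framework interface {
	RunFilterPlugins(podName, nodeName string) *Status
}

// fakeFramework lets tests control the filter result per node
// without wiring up real plugins.
type fakeFramework struct {
	statusPerNode map[string]*Status
}

func (f *fakeFramework) RunFilterPlugins(podName, nodeName string) *Status {
	return f.statusPerNode[nodeName] // nil (success) for unknown nodes
}

func main() {
	fw := &fakeFramework{statusPerNode: map[string]*Status{
		"node-b": {code: Unschedulable},
	}}
	fmt.Println(fw.RunFilterPlugins("p", "node-a").IsSuccess())
	fmt.Println(fw.RunFilterPlugins("p", "node-b").IsSuccess())
}
```

With a fake like this, code that takes a `Framework` can be tested directly, which is the argument against threading a bare `filterPluginsFunc` parameter through every call site.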
> If we have a proper fake Framework, testing shouldn't be a problem. Also, we should prefer to test exported interfaces.
We do have a fakeFramework in scheduling_queue_test.go
I don't see a value in passing the function as a parameter, it is not easy to follow. Make podFitsOnNode, selectVictimsOnNode and selectNodesForPreemption private methods of genericScheduler, and that will give you access to the framework.
@ahg-g Agree with you, one more parameter is really unnecessary, whether it is framework or filterPluginsFunc.
nodeNameToInfo map[string]*schedulernodeinfo.NodeInfo,
potentialNodes []*v1.Node,
fitPredicates map[string]predicates.FitPredicate,
metadataProducer predicates.PredicateMetadataProducer,
queue internalqueue.SchedulingQueue,
pdbs []*policy.PodDisruptionBudget,
filterPluginsFunc func(pc *framework.PluginContext, pod *v1.Pod, nodeName string) *framework.Status,
same here
@@ -525,7 +515,15 @@ func (g *genericScheduler) findNodesThatFit(pluginContext *framework.PluginContext
	}
} else {
	predicateResultLock.Lock()
	failedPredicateMap[nodeName] = failedPredicates
	if status != nil && !status.IsSuccess() {
nit: !status.IsSuccess() won't panic even if the status is nil; please take a look at the implementation of the IsSuccess function.
Sorry, I misunderstood. You mean that when the status is nil, calling status.IsSuccess() won't panic?
nil is untyped; my understanding is that this would fail with something like a "use of untyped nil" error or a panic.
If my understanding is wrong, please let me know. Thanks. :)
> Sorry, I misunderstood. You mean that when the status is nil, calling status.IsSuccess() won't panic?
Please take a look at https://github.com/kubernetes/kubernetes/pull/81876/files#diff-c237cdd9e4cb201118ca380732d7f361L509
It is a typed nil. I created a snippet here which shows it won't cause a panic:

package main

import (
	"fmt"
)

type Status struct{}

func (s *Status) IsSuccess() bool {
	if s == nil {
		return true
	}
	return false
}

func main() {
	st := returnStatus()
	fmt.Println(st.IsSuccess())
}

func returnStatus() *Status {
	return nil
}
$ go run ...
true
cool! I got it.
As mentioned in comment, how about converting
There could be a lot of compatibility work to do since they are using two different error systems. I tried once, but it caused a lot of trouble and looked quite like a hack; this PR is a fine approach to solve this issue IMO.
cc/ @alculquicondor
Force-pushed from c296483 to 7fb9e23
@@ -137,7 +137,7 @@ type ScheduleAlgorithm interface {
// the pod by preempting lower priority pods if possible.
// It returns the node where preemption happened, a list of preempted pods, a
// list of pods whose nominated node name should be removed, and error if any.
Preempt(*v1.Pod, error) (selectedNode *v1.Node, preemptedPods []*v1.Pod, cleanupNominatedPods []*v1.Pod, err error)
Preempt(*framework.PluginContext, *v1.Pod, error) (selectedNode *v1.Node, preemptedPods []*v1.Pod, cleanupNominatedPods []*v1.Pod, err error)
pod and then pluginContext? I prefer we are consistent with Schedule
I prefer to keep the context as the first argument; there is a chance it would conform to context.Context.
@alculquicondor
FYI: https://golang.org/pkg/context/ ("The Context should be the first parameter")
This is not currently an implementation of context.Context, so it doesn't apply. That said, we should consider rearranging the parameters in Schedule along with Preempt. But that shouldn't be in this PR.
+1 to Aldo's suggestion regarding rearranging the Schedule to be consistent with Preempt in a separate PR.
Ok. I will do that. :)
// Iterate each plugin to verify current node
status := filterPluginsFunc(pluginContext, pod, info.Node().Name)
if !status.IsSuccess() {
This doesn't seem necessary. We can directly return, right?
It would be clearer to separate the good and bad cases IMO
the return value is the same, I don't see the point.
I don't see the need for the returned boolean; whether or not the pod fits on the node is easily determined by the other returned values, so I suggest we just return failedPredicates, status, nil here.
Thanks for the review, @ahg-g. IMO, we need to return a bool value to indicate whether the current node fits.
The result cannot be directly determined from failedPredicates or err because of L663 and L681. Indeed, we could check both len(failedPredicates) and err, but this undoubtedly adds complexity to the judgment; we need more readable and easy-to-understand logic.
@@ -1121,7 +1136,7 @@ func selectVictimsOnNode(
// inter-pod affinity to one or more victims, but we have decided not to
// support this case for performance reasons. Having affinity to lower
// priority pods is not a recommended configuration anyway.
if fits, _, err := podFitsOnNode(pod, meta, nodeInfoCopy, fitPredicates, queue, false); !fits {
if fits, _, _, err := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {
	if err != nil {
what about status? That could be an error as well.
DONE
@@ -1330,7 +1330,8 @@ func TestSelectNodesForPreemption(t *testing.T) {
newnode := makeNode("newnode", 1000*5, priorityutil.DefaultMemoryRequest*5)
newnode.ObjectMeta.Labels = map[string]string{"hostname": "newnode"}
nodes = append(nodes, newnode)
nodeToPods, err := selectNodesForPreemption(test.pod, nodeNameToInfo, nodes, test.predicates, PredicateMetadata, nil, nil)
pluginContext := framework.NewPluginContext()
nodeToPods, err := selectNodesForPreemption(pluginContext, test.pod, nodeNameToInfo, nodes, test.predicates, PredicateMetadata, nil, nil, func(pc *framework.PluginContext, pod *v1.Pod, nodeName string) *framework.Status { return nil })
Can we add a test for what we are adding? preemption when the filter plugins keep filtering the nodes.
Did you work on this?
Sorry, I missed it. If I am not mistaken, the test case you are talking about is meant to confirm that the filter plugins are also executed when preemption occurs.
That seems less like a new test case than a check on the number of times the filter plugins are executed.
I will check filterPlugin.numFilterCalled against the expected value in TestSelectNodesForPreemption. What do you think of this?
that sounds good. But you could also check that a plugin responds Unschedulable in the preemption phase and then the node is not returned in the list.
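The shape of that regression test can be sketched in plain Go. The `FilterPlugin` counter and the node-selection loop below are simplified, hypothetical stand-ins for the real `numFilterCalled` plugin and `selectNodesForPreemption`, just to show the two assertions under discussion (call count, and a rejected node being dropped):

```go
package main

import "fmt"

// Status is a minimal stand-in for the framework status type.
type Status struct{ unschedulable bool }

func (s *Status) IsSuccess() bool { return s == nil }

// FilterPlugin counts how often it runs and rejects one node.
type FilterPlugin struct {
	numFilterCalled int
	rejectNode      string
}

func (fp *FilterPlugin) Filter(nodeName string) *Status {
	fp.numFilterCalled++
	if nodeName == fp.rejectNode {
		return &Status{unschedulable: true}
	}
	return nil
}

// selectCandidates mimics preemption iterating over nodes and
// skipping any node the filter plugin marks unschedulable.
func selectCandidates(fp *FilterPlugin, nodes []string) []string {
	var out []string
	for _, n := range nodes {
		if fp.Filter(n).IsSuccess() {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	fp := &FilterPlugin{rejectNode: "node2"}
	got := selectCandidates(fp, []string{"node1", "node2", "node3"})
	fmt.Println(got, fp.numFilterCalled)
}
```

A real test would assert both that the plugin ran once per candidate node and that the rejected node never appears in the returned preemption candidates.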
DONE
@@ -349,14 +349,14 @@ func (sched *Scheduler) schedule(pod *v1.Pod, pluginContext *framework.PluginContext
// preempt tries to create room for a pod that has failed to schedule, by preempting lower priority pods if possible.
// If it succeeds, it adds the name of the node where preemption has happened to the pod spec.
// It returns the node name and an error if any.
func (sched *Scheduler) preempt(fwk framework.Framework, preemptor *v1.Pod, scheduleErr error) (string, error) {
func (sched *Scheduler) preempt(pluginContext *framework.PluginContext, fwk framework.Framework, preemptor *v1.Pod, scheduleErr error) (string, error) {
the framework should be available in sched.Framework. Pass pluginContext as the second argument for consistency with schedule.
Like the reply above, I actually prefer to pass pluginContext as the first argument to schedule as well.
I don't disagree, but please do it in a separate PR.
Sure :)
FYI, this is being handled in #82072
We should do this in a separate small PR because we don't know if #82072 will move forward or not, and if yes in what form, so it may take a little while to merge.
Thanks for sending this PR, I took a quick look, I will take another look once the comments are addressed.
failedPredicateMap[nodeName] = failedPredicates
if !status.IsSuccess() {
	filteredNodesStatuses[nodeName] = status
	if status.Code() != framework.Unschedulable {
podFitsOnNode should return an error if this happens, so we shouldn't check here again.
this is not addressed yet.
Done, thanks!
}

// Iterate each plugin to verify current node
status := filterPluginsFunc(pluginContext, pod, info.Node().Name)
we should run the filter plugins inside the for i := 0; i < 2; i++ loop (at the end) so that they get examined both with and without nominated pods.
Also, return an error in case the status is not one of the expected values (not Success or Unschedulable).
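The two-pass structure being referenced can be sketched standalone. Everything here (the `runFilters` stub, the loop body) is illustrative, not the real `podFitsOnNode` code; the point is only that the filters must pass on both passes, with nominated pods added and without them:

```go
package main

import "fmt"

// Status is a minimal stand-in for the framework status type.
type Status struct{ unschedulable bool }

func (s *Status) IsSuccess() bool { return s == nil }

// runFilters stands in for framework.RunFilterPlugins. In this
// sketch the node only fits when nominated pods are ignored.
func runFilters(withNominatedPods bool) *Status {
	if withNominatedPods {
		return &Status{unschedulable: true}
	}
	return nil
}

// podFitsOnNodeSketch mirrors the two-pass check: pass 0 assumes
// nominated pods are running, pass 1 does not, and the filter
// plugins run at the end of each pass.
func podFitsOnNodeSketch() (bool, *Status) {
	var status *Status
	for i := 0; i < 2; i++ {
		withNominatedPods := i == 0
		// ... predicate checks would run here ...
		status = runFilters(withNominatedPods)
		if !status.IsSuccess() {
			return false, status
		}
	}
	return true, status
}

func main() {
	fits, _ := podFitsOnNodeSketch()
	fmt.Println(fits)
}
```

Running the filters only once, outside the loop, would miss the case where a node fits without nominated pods but not with them, which is exactly why the reviewer asks for the call to move inside.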
Another thing we need to think about is whether or not we should pass the alwaysCheckAllPredicates flag to RunFilterPlugins.
I suggest a separate PR to complete this.
We don't need a separate PR for the first comment, this is part of addressing the bug.
Yeah!
Force-pushed from cd2333d to f078b59
// Iterate each plugin to verify current node
status = g.framework.RunFilterPlugins(pluginContext, pod, info.Node().Name)
if !status.IsSuccess() && status.Code() != framework.Unschedulable {
use status.IsUnschedulable, or even, what about simply if status.Code() == framework.Error?
we should use !status.IsUnschedulable(), not status.Code() == framework.Error, because Success and Unschedulable* are the only two valid statuses that we expect from filters; all other statuses should be considered errors.
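That rule, Success and Unschedulable are expected and everything else is an internal error, can be made concrete with a small sketch. The types and the `checkFilterStatus` helper below are hypothetical simplifications, not the real framework code:

```go
package main

import (
	"errors"
	"fmt"
)

// Code enumerates the status codes relevant to this discussion.
type Code int

const (
	Success Code = iota
	Error
	Unschedulable
)

// Status is a minimal stand-in for the framework status type.
type Status struct {
	code    Code
	message string
}

// Code treats a nil status as Success, so nil is safe to check.
func (s *Status) Code() Code {
	if s == nil {
		return Success
	}
	return s.code
}

func (s *Status) IsSuccess() bool       { return s.Code() == Success }
func (s *Status) IsUnschedulable() bool { return s.Code() == Unschedulable }

func (s *Status) AsError() error {
	if s.IsSuccess() {
		return nil
	}
	return errors.New(s.message)
}

// checkFilterStatus applies the reviewer's rule: Success fits,
// Unschedulable does not fit but is not an error, and any other
// status surfaces as an error.
func checkFilterStatus(s *Status) (fits bool, err error) {
	if s.IsSuccess() {
		return true, nil
	}
	if s.IsUnschedulable() {
		return false, nil
	}
	return false, s.AsError()
}

func main() {
	fmt.Println(checkFilterStatus(nil))
	fmt.Println(checkFilterStatus(&Status{code: Unschedulable}))
	fmt.Println(checkFilterStatus(&Status{code: Error, message: "boom"}))
}
```

Checking `!status.IsUnschedulable()` rather than `status.Code() == framework.Error` keeps any future unexpected code on the error path, which is the robustness argument made above.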
if err != nil {
	klog.Warningf("Encountered error while selecting victims on node %v: %v", nodeInfo.Node().Name, err)
}

if !status.IsSuccess() {
It looks like you already handle it in podFitsOnNode, when you return status.AsError()
There may be unschedulable here.
My understanding is that these warnings are for actual errors. Also, if the filters failed, this conditional and the one above would both produce the same output, which is bad for troubleshooting when looking at logs.
I agree, we shouldn't have this warning.
@@ -653,7 +653,7 @@ func TestGenericScheduler(t *testing.T) {
schedulerapi.DefaultPercentageOfNodesToScore,
false)
result, err := scheduler.Schedule(test.pod, framework.NewPluginContext())
if !reflect.DeepEqual(err, test.wErr) {
if err != nil && !reflect.DeepEqual(err, test.wErr) {
why is this change necessary?
Because the returned err may be nil, otherwise reflect.DeepEqual(err, test.wErr) in test will panic.
> reflect.DeepEqual(err, test.wErr) in test will panic
No, it won't panic.
https://play.golang.org/p/nLXk5MDyc7k
In this line of code, I did get a nil pointer error in my development environment, which is why I originally made this change. But it doesn't reproduce now, so weird.
I have reverted this change; thanks to all of you for the reminder.
@@ -1307,6 +1307,9 @@ func TestSelectNodesForPreemption(t *testing.T) {
labelKeys := []string{"hostname", "zone", "region"}
for _, test := range tests {
	t.Run(test.name, func(t *testing.T) {
		g := &genericScheduler{
			framework: emptyFramework,
Can we add some tests with a non-empty framework? Basically, to have a regression test.
TestSelectNodesForPreemption mainly tests selectNodesForPreemption, so the empty framework does not seem to be necessary. However, I have changed to a non-empty framework. If I misunderstood your comment, please let me know, thanks. :)
}

return len(failedPredicates) == 0, failedPredicates, nil
return true, failedPredicates, status, nil
lines 682-686 should be replaced with:
fit := len(failedPredicates) == 0 && status.IsSuccess()
return fit, failedPredicates, status, nil
SGTM, done, thanks!
@@ -676,9 +671,19 @@ func podFitsOnNode(
	}
	}
}

// Iterate each plugin to verify current node
please remove this comment, it does not read well.
DONE
	filteredNodesStatuses[nodeName] = status
} else {
	failedPredicateMap[nodeName] = failedPredicates
}
we should add both:
if !status.IsSuccess() {
	filteredNodesStatuses[nodeName] = status
}
if len(failedPredicates) != 0 {
	failedPredicateMap[nodeName] = failedPredicates
}
Fixed
@ahg-g: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your and have them propose you as an additional delegate for this responsibility. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@k82cn can you please add milestone 1.16 for this one as well? this is a bug fix.
You can rebase now
Force-pushed from f340772 to 6da5f9e
Ok, done
/milestone v1.16
Implementation LGTM. Let's just add the regression test.
Force-pushed from 6da5f9e to 892ddf5
/lgtm
@@ -1119,14 +1119,31 @@ var startTime20190107 = metav1.Date(2019, 1, 7, 1, 1, 1, 0, time.UTC)
// that podsFitsOnNode works correctly and is tested separately.
func TestSelectNodesForPreemption(t *testing.T) {
	defer algorithmpredicates.SetPredicatesOrderingDuringTest(order)()

	filterPluginRegistry := framework.Registry{filterPlugin.Name(): NewFilterPlugin}
we faced problems in the past when using shared variables for the plugin instances, it would be great if we can change the way we instantiate plugins to be similar to what we did in integration tests:
define a generic function for instantiation:
// newPlugin returns a plugin factory with specified Plugin.
func newPlugin(plugin framework.Plugin) framework.PluginFactory {
return func(_ *runtime.Unknown, fh framework.FrameworkHandle) (framework.Plugin, error) {
return plugin, nil
}
}
and here, you can just do:
filterPlugin := &FilterPlugin{}
registry := framework.Registry{filterPluginName: newPlugin(filterPlugin)}
The above should allow you to remove the global instance filterPlugin
DONE
@ahg-g @alculquicondor @wgliang I'm the Release Lead Shadow for the 1.16 release. This is an outstanding 1.16 PR and needs to be merged before EOD Monday 9/9 in time for our rc cut. It looks like, according to @ahg-g's review, that this isn't actually marked
do you have a reasonable notion of when this will be done, @alculquicondor? one thing that's missing is an Approver, it might be helpful to find someone so we can 🚢 it!
I can approve it.
/retest
Force-pushed from 892ddf5 to d84a75c
@ahg-g @alculquicondor
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ahg-g, wgliang. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #81866
Special notes for your reviewer:
@alculquicondor
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: