
Fix filter plugins not being called during preemption #81876

Conversation

wgliang (Member) commented Aug 24, 2019:

What type of PR is this?

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #81866

Special notes for your reviewer:
@alculquicondor

Does this PR introduce a user-facing change?:

Fix filter plugins not being called during preemption

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

NONE
wgliang (Member, author) commented Aug 24, 2019:

/sig scheduling
/priority important-soon

@wgliang wgliang force-pushed the wgliang:bugfix/filter-plugins-are-not-been-called-during-preemption branch 2 times, most recently from 5a62ce6 to 8cc86ce Aug 24, 2019
A Member left a review comment:

@@ -1121,7 +1136,7 @@ func selectVictimsOnNode(
// inter-pod affinity to one or more victims, but we have decided not to
// support this case for performance reasons. Having affinity to lower
// priority pods is not a recommended configuration anyway.
if fits, _, err := podFitsOnNode(pod, meta, nodeInfoCopy, fitPredicates, queue, false); !fits {
if fits, _, err, _ := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {

draveness (Member) commented Aug 24, 2019:

I think the following is correct, since err is the last return value:

Suggested change
if fits, _, err, _ := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {
if fits, _, _, err := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {

wgliang (Member, author) commented Aug 24, 2019:

done

pod *v1.Pod,
meta predicates.PredicateMetadata,
info *schedulernodeinfo.NodeInfo,
predicateFuncs map[string]predicates.FitPredicate,
queue internalqueue.SchedulingQueue,
alwaysCheckAllPredicates bool,
) (bool, []predicates.PredicateFailureReason, error) {
filterPluginsFunc func(pc *framework.PluginContext, pod *v1.Pod, nodeName string) *framework.Status,

draveness (Member) commented Aug 24, 2019:

Why are you passing the function as a parameter? What about using the framework instead?

wgliang (Member, author) commented Aug 25, 2019:

Thanks @draveness. IMO it is sufficient to pass filterPluginsFunc; that seems better from the perspective of keeping the function implementation and its tests minimal.

alculquicondor (Member) commented Aug 26, 2019:

On the other hand, this approach has the following drawbacks:

  • Harder to read; IDEs can't find all the references to Framework.RunFilterPlugins.
  • We have repeated code for the function prototype, so refactoring takes longer.

If we have a proper fake Framework, testing shouldn't be a problem. Also, we should prefer to test exported interfaces.

draveness (Member) commented Aug 26, 2019:

If we have a proper fake Framework, testing shouldn't be a problem. Also, we should prefer to test exported interfaces.

We do have a fakeFramework in scheduling_queue_test.go

ahg-g (Member) commented Aug 27, 2019:

I don't see value in passing the function as a parameter; it is not easy to follow. Make podFitsOnNode, selectVictimsOnNode, and selectNodesForPreemption private methods of genericScheduler, and that will give you access to the framework.

wgliang (Member, author) commented Aug 28, 2019:

@ahg-g Agreed; the extra parameter is unnecessary, whether it is the framework or filterPluginsFunc.
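
For illustration, a self-contained sketch of the direction suggested above: once podFitsOnNode is a method on genericScheduler, the framework is reachable through the receiver and no function parameter is needed. The types below are simplified stand-ins, not the scheduler's own:

package main

import "fmt"

// Status and Framework stand in for framework.Status and framework.Framework.
type Status struct{ code int }

func (s *Status) IsSuccess() bool { return s == nil || s.code == 0 }

type Framework struct{}

func (f *Framework) RunFilterPlugins(nodeName string) *Status { return nil }

// genericScheduler already holds the framework, so a method can reach it
// without threading a filterPluginsFunc parameter through every call.
type genericScheduler struct {
	framework *Framework
}

func (g *genericScheduler) podFitsOnNode(nodeName string) bool {
	// ...fit predicates would run here...
	status := g.framework.RunFilterPlugins(nodeName)
	return status.IsSuccess()
}

func main() {
	g := &genericScheduler{framework: &Framework{}}
	fmt.Println(g.podFitsOnNode("node-1")) // true
}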

nodeNameToInfo map[string]*schedulernodeinfo.NodeInfo,
potentialNodes []*v1.Node,
fitPredicates map[string]predicates.FitPredicate,
metadataProducer predicates.PredicateMetadataProducer,
queue internalqueue.SchedulingQueue,
pdbs []*policy.PodDisruptionBudget,
filterPluginsFunc func(pc *framework.PluginContext, pod *v1.Pod, nodeName string) *framework.Status,

draveness (Member) commented Aug 24, 2019:

same here

@@ -525,7 +515,15 @@ func (g *genericScheduler) findNodesThatFit(pluginContext *framework.PluginConte
}
} else {
predicateResultLock.Lock()
failedPredicateMap[nodeName] = failedPredicates
if status != nil && !status.IsSuccess() {

draveness (Member) commented Aug 24, 2019:

nit: !status.IsSuccess() won't panic even if status is nil; please take a look at the implementation of the IsSuccess function.

wgliang (Member, author) commented Aug 25, 2019:

Sorry, I may have misunderstood you. You mean that when status is nil, calling status.IsSuccess() won't panic?
My understanding was that nil is untyped here, so it would produce something like a "use of untyped nil" panic.

If my understanding is wrong, please let me know. Thanks. :)

draveness (Member) commented Aug 25, 2019:

Sorry, I may have misunderstood you. You mean that when status is nil, calling status.IsSuccess() won't panic?

Please take a look at https://github.com/kubernetes/kubernetes/pull/81876/files#diff-c237cdd9e4cb201118ca380732d7f361L509

draveness (Member) commented Aug 25, 2019:

It is a typed nil. I created a snippet here which shows it won't cause a panic.

package main

import (
	"fmt"
)

type Status struct{}

func (s *Status) IsSuccess() bool {
	if s == nil {
		return true
	}
	return false
}

func main() {
	st := returnStatus()
	fmt.Println(st.IsSuccess())
}

func returnStatus() *Status {
	return nil
}

$ go run ...
true

wgliang (Member, author) commented Aug 25, 2019:

cool! I got it.

hex108 (Member) commented Aug 24, 2019:

As mentioned in the comment, how about converting podFitsOnNode to a default filter plugin, so that genericScheduler.Preempt calls framework.RunFilterPlugins instead of podFitsOnNode?

draveness (Member) commented Aug 24, 2019:

As mentioned in the comment, how about converting podFitsOnNode to a default filter plugin, so that genericScheduler.Preempt calls framework.RunFilterPlugins instead of podFitsOnNode?

There could be a lot of compatibility work to do, since they use two different error systems. I tried it once, but it caused a lot of trouble and looked quite like a hack; this PR is a fine approach to solving the issue IMO.
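
A rough, self-contained illustration of the mismatch being described: predicates report (fits, reasons, err) while plugins report a single coded status, so wrapping one in the other means folding several values into one. Everything below is a simplified model, not the scheduler's actual types:

package main

import "fmt"

// Predicate-style result: a fit flag plus a list of failure reasons and an error.
type PredicateFailureReason string

// Plugin-style result: one status with a code and a message.
const (
	Success = iota
	Unschedulable
	Error
)

type Status struct {
	Code    int
	Message string
}

// predicateAsFilter shows the kind of conversion a "podFitsOnNode as a filter
// plugin" wrapper would need: the richer predicate result is flattened into a
// single Status, which is where the friction comes from.
func predicateAsFilter(fits bool, reasons []PredicateFailureReason, err error) *Status {
	if err != nil {
		return &Status{Code: Error, Message: err.Error()}
	}
	if !fits {
		msg := ""
		for _, r := range reasons {
			msg += string(r) + "; "
		}
		return &Status{Code: Unschedulable, Message: msg}
	}
	return nil // nil means Success
}

func main() {
	st := predicateAsFilter(false, []PredicateFailureReason{"insufficient cpu"}, nil)
	fmt.Printf("%+v\n", st)
}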

draveness (Member) commented Aug 24, 2019.

@wgliang wgliang force-pushed the wgliang:bugfix/filter-plugins-are-not-been-called-during-preemption branch 2 times, most recently from c296483 to 7fb9e23 Aug 24, 2019
@@ -137,7 +137,7 @@ type ScheduleAlgorithm interface {
// the pod by preempting lower priority pods if possible.
// It returns the node where preemption happened, a list of preempted pods, a
// list of pods whose nominated node name should be removed, and error if any.
Preempt(*v1.Pod, error) (selectedNode *v1.Node, preemptedPods []*v1.Pod, cleanupNominatedPods []*v1.Pod, err error)
Preempt(*framework.PluginContext, *v1.Pod, error) (selectedNode *v1.Node, preemptedPods []*v1.Pod, cleanupNominatedPods []*v1.Pod, err error)

alculquicondor (Member) commented Aug 26, 2019:

pod and then pluginContext? I prefer that we stay consistent with Schedule.

draveness (Member) commented Aug 26, 2019:

I prefer to keep the context as the first argument; there is a chance it will conform to context.Context.

wgliang (Member, author) commented Aug 27, 2019:

@alculquicondor FYI, per https://golang.org/pkg/context/, the Context should be the first parameter.

alculquicondor (Member) commented Aug 27, 2019:

This is not currently an implementation of context.Context, so it doesn't apply. That said, we should consider rearranging the parameters in Schedule along with Preempt. But that shouldn't be in this PR.

ahg-g (Member) commented Aug 27, 2019:

+1 to Aldo's suggestion regarding rearranging the Schedule to be consistent with Preempt in a separate PR.

wgliang (Member, author) commented Aug 28, 2019:

Ok. I will do that. :)

draveness (Member) commented Sep 6, 2019:

IMO we can accept this, since we have already reached a consensus on conforming to Context in #81433, and #82072 is making the change.
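
For reference, the convention cited from the context package docs is that the context (or a context-like value) comes first, with the remaining arguments after it. A minimal standalone example:

package main

import (
	"context"
	"fmt"
)

// The ctx-first convention: the first parameter is the context, typically named ctx.
func schedulePod(ctx context.Context, podName string) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		fmt.Println("scheduling", podName)
		return nil
	}
}

func main() {
	if err := schedulePod(context.Background(), "nginx-0"); err != nil {
		fmt.Println(err)
	}
}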



// Iterate each plugin to verify current node
status := filterPluginsFunc(pluginContext, pod, info.Node().Name)
if !status.IsSuccess() {

alculquicondor (Member) commented Aug 26, 2019:

This doesn't seem necessary. We can directly return, right?

draveness (Member) commented Aug 26, 2019:

It would be clearer to separate the good and bad cases, IMO.

alculquicondor (Member) commented Aug 27, 2019:

The return value is the same; I don't see the point.

ahg-g (Member) commented Aug 27, 2019:

I don't see the need for the returned boolean; whether or not the pod fits on the node is easily determined by the other returned values, so I suggest we just return failedPredicates, status, nil here.

wgliang (Member, author) commented Aug 28, 2019:

Thanks for the review @ahg-g. IMO we need to return a bool to indicate whether the current node fits.

The result cannot be determined directly from failedPredicates or err alone (because of L663 and L681). We could check both len(failedPredicates) and err, but that adds complexity at every call site; a single bool keeps the logic readable and easy to understand.
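
The compromise the thread eventually lands on (see the later suggestion to compute fit from the other return values) can be summarised in a small standalone model; the types below are placeholders for the scheduler's own:

package main

import "fmt"

type PredicateFailureReason string

type Status struct{ code int }

func (s *Status) IsSuccess() bool { return s == nil || s.code == 0 }

// The pod fits only when no predicate failed and the filter plugins all succeeded,
// so the boolean can be derived rather than tracked separately.
func fits(failedPredicates []PredicateFailureReason, status *Status) bool {
	return len(failedPredicates) == 0 && status.IsSuccess()
}

func main() {
	fmt.Println(fits(nil, nil))                                          // true
	fmt.Println(fits([]PredicateFailureReason{"insufficient cpu"}, nil)) // false
	fmt.Println(fits(nil, &Status{code: 1}))                             // false
}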

@@ -1121,7 +1136,7 @@ func selectVictimsOnNode(
// inter-pod affinity to one or more victims, but we have decided not to
// support this case for performance reasons. Having affinity to lower
// priority pods is not a recommended configuration anyway.
if fits, _, err := podFitsOnNode(pod, meta, nodeInfoCopy, fitPredicates, queue, false); !fits {
if fits, _, _, err := podFitsOnNode(pluginContext, pod, meta, nodeInfoCopy, fitPredicates, queue, false, filterPluginsFunc); !fits {
if err != nil {

alculquicondor (Member) commented Aug 26, 2019:

what about status? That could be an error as well.

wgliang (Member, author) commented Aug 28, 2019:

DONE

@@ -1330,7 +1330,8 @@ func TestSelectNodesForPreemption(t *testing.T) {
newnode := makeNode("newnode", 1000*5, priorityutil.DefaultMemoryRequest*5)
newnode.ObjectMeta.Labels = map[string]string{"hostname": "newnode"}
nodes = append(nodes, newnode)
nodeToPods, err := selectNodesForPreemption(test.pod, nodeNameToInfo, nodes, test.predicates, PredicateMetadata, nil, nil)
pluginContext := framework.NewPluginContext()
nodeToPods, err := selectNodesForPreemption(pluginContext, test.pod, nodeNameToInfo, nodes, test.predicates, PredicateMetadata, nil, nil, func(pc *framework.PluginContext, pod *v1.Pod, nodeName string) *framework.Status { return nil })

alculquicondor (Member) commented Aug 26, 2019:

Can we add a test for what we are adding? That is, preemption when the filter plugins keep filtering out nodes.

alculquicondor (Member) commented Sep 3, 2019:

Did you work on this?

wgliang (Member, author) commented Sep 4, 2019:

Sorry, I missed it. If I am not mistaken, the test case you are talking about is meant to confirm that the filter plugins are also executed when preemption occurs.

Rather than a separate test case, that seems to come down to checking how many times the filter plugins are executed.

I will compare filterPlugin.numFilterCalled against the expected count in TestSelectNodesForPreemption. What do you think?

alculquicondor (Member) commented Sep 4, 2019:

That sounds good. But you could also check that when a plugin returns Unschedulable in the preemption phase, the node is not returned in the list.

wgliang (Member, author) commented Sep 5, 2019:

DONE
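
For illustration, a self-contained model of the kind of regression test discussed above: a fake filter plugin that counts its calls and marks one node Unschedulable, so the test can assert both the call count and that the rejected node is not returned. Names like fakeFilterPlugin are placeholders, not the ones in the PR:

package main

import "fmt"

const (
	Success = iota
	Unschedulable
)

type Status struct{ code int }

func (s *Status) IsSuccess() bool { return s == nil || s.code == Success }

// fakeFilterPlugin records how often it is called and rejects one named node.
type fakeFilterPlugin struct {
	numFilterCalled int
	failedNode      string
}

func (f *fakeFilterPlugin) Filter(nodeName string) *Status {
	f.numFilterCalled++
	if nodeName == f.failedNode {
		return &Status{code: Unschedulable}
	}
	return nil
}

func main() {
	plugin := &fakeFilterPlugin{failedNode: "node-2"}
	candidates := []string{}
	for _, node := range []string{"node-1", "node-2", "node-3"} {
		if plugin.Filter(node).IsSuccess() {
			candidates = append(candidates, node)
		}
	}
	// Expect 3 filter calls and node-2 filtered out of the candidate list.
	fmt.Println(plugin.numFilterCalled, candidates) // 3 [node-1 node-3]
}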

@@ -349,14 +349,14 @@ func (sched *Scheduler) schedule(pod *v1.Pod, pluginContext *framework.PluginCon
// preempt tries to create room for a pod that has failed to schedule, by preempting lower priority pods if possible.
// If it succeeds, it adds the name of the node where preemption has happened to the pod spec.
// It returns the node name and an error if any.
func (sched *Scheduler) preempt(fwk framework.Framework, preemptor *v1.Pod, scheduleErr error) (string, error) {
func (sched *Scheduler) preempt(pluginContext *framework.PluginContext, fwk framework.Framework, preemptor *v1.Pod, scheduleErr error) (string, error) {

alculquicondor (Member) commented Aug 26, 2019:

The framework should be available in sched.Framework. Pass pluginContext as the second argument for consistency with schedule.

wgliang (Member, author) commented Aug 27, 2019:

As in my reply above, I would actually prefer to update schedule to pass pluginContext as the first argument.

alculquicondor (Member) commented Aug 27, 2019:

I don't disagree, but please do it in a separate PR.

wgliang (Member, author) commented Aug 27, 2019:

Sure :)

alculquicondor (Member) commented Sep 11, 2019:

FYI, this is being handled in #82072

ahg-g (Member) commented Sep 11, 2019:

We should do this in a separate small PR, because we don't know whether #82072 will move forward, and if so in what form, so it may take a little while to merge.

A Member left a review comment:

Thanks for sending this PR. I took a quick look; I will take another look once the comments are addressed.

failedPredicateMap[nodeName] = failedPredicates
if !status.IsSuccess() {
filteredNodesStatuses[nodeName] = status
if status.Code() != framework.Unschedulable {

ahg-g (Member) commented Aug 27, 2019:

podFitsOnNode should return an error if this happens, so we shouldn't check here again.

ahg-g (Member) commented Aug 28, 2019:

this is not addressed yet.

wgliang (Member, author) commented Aug 29, 2019:

DONE, thanks!



}

// Iterate each plugin to verify current node
status := filterPluginsFunc(pluginContext, pod, info.Node().Name)

ahg-g (Member) commented Aug 27, 2019:

We should run the filter plugins inside the for i := 0; i < 2; i++ loop (at the end), so that they get evaluated both with and without nominated pods.

Also, return an error in case status is not one of the expected values (i.e., not Success or Unschedulable).
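
A standalone sketch of the first point: in podFitsOnNode the checks run twice, once with higher-priority nominated pods added and once without, so placing the filter-plugin call inside that loop lets the plugins see both situations. The function below is a simplified model, not the real signature:

package main

import "fmt"

// fitsOnNode models podFitsOnNode's two passes: pass 0 considers nominated pods,
// pass 1 does not. Running the filters inside the loop covers both cases.
func fitsOnNode(runFilters func(withNominatedPods bool) bool) bool {
	for i := 0; i < 2; i++ {
		withNominatedPods := i == 0
		// ...fit predicates would run here for this pass...
		if !runFilters(withNominatedPods) {
			return false
		}
	}
	return true
}

func main() {
	fits := fitsOnNode(func(withNominatedPods bool) bool {
		fmt.Println("filters evaluated; nominated pods considered:", withNominatedPods)
		return true
	})
	fmt.Println("fits:", fits)
}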

ahg-g (Member) commented Aug 27, 2019:

Another thing we need to think about is whether or not we should pass the alwaysCheckAllPredicates flag to RunFilterPlugins.

wgliang (Member, author) commented Aug 28, 2019:

I suggest a separate PR to complete this.

ahg-g (Member) commented Aug 28, 2019:

We don't need a separate PR for the first comment; this is part of addressing the bug.

wgliang (Member, author) commented Aug 29, 2019:

Yeah!



@wgliang wgliang force-pushed the wgliang:bugfix/filter-plugins-are-not-been-called-during-preemption branch 2 times, most recently from cd2333d to f078b59 Aug 28, 2019

// Iterate each plugin to verify current node
status = g.framework.RunFilterPlugins(pluginContext, pod, info.Node().Name)
if !status.IsSuccess() && status.Code() != framework.Unschedulable {

alculquicondor (Member) commented Aug 29, 2019:

Use status.IsUnschedulable(), or even simpler, what about if status.Code() == framework.Error?

ahg-g (Member) commented Aug 29, 2019:

We should use !status.IsUnschedulable(), not status.Code() == framework.Error, because Success and Unschedulable* are the only two valid statuses that we expect from filters; all other statuses should be considered errors.
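
A self-contained sketch of that rule (Success and Unschedulable pass through; anything else becomes an error), using placeholder types modeled loosely on framework.Status:

package main

import (
	"errors"
	"fmt"
)

const (
	Success = iota
	Unschedulable
	Error
)

type Status struct {
	code   int
	reason string
}

func (s *Status) IsSuccess() bool       { return s == nil || s.code == Success }
func (s *Status) IsUnschedulable() bool { return s != nil && s.code == Unschedulable }
func (s *Status) AsError() error {
	if s.IsSuccess() {
		return nil
	}
	return errors.New(s.reason)
}

// Only Success and Unschedulable are expected from filters; everything else
// is surfaced as an internal error.
func checkFilterStatus(status *Status) error {
	if !status.IsSuccess() && !status.IsUnschedulable() {
		return status.AsError()
	}
	return nil
}

func main() {
	fmt.Println(checkFilterStatus(&Status{code: Unschedulable, reason: "node is full"})) // <nil>
	fmt.Println(checkFilterStatus(&Status{code: Error, reason: "plugin failed"}))        // plugin failed
}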

if err != nil {
klog.Warningf("Encountered error while selecting victims on node %v: %v", nodeInfo.Node().Name, err)
}

if !status.IsSuccess() {

alculquicondor (Member) commented Aug 29, 2019:

It looks like you already handle it in podFitsOnNode, when you return status.AsError()

wgliang (Member, author) commented Aug 30, 2019:

The status here may also be Unschedulable.

alculquicondor (Member) commented Aug 30, 2019:

My understanding is that these warnings are for actual errors. Also, if the filters failed, this conditional and the one above would both produce the same output, which is bad for troubleshooting when looking at logs.

ahg-g (Member) commented Aug 30, 2019:

I agree, we shouldn't have this warning.

@@ -653,7 +653,7 @@ func TestGenericScheduler(t *testing.T) {
schedulerapi.DefaultPercentageOfNodesToScore,
false)
result, err := scheduler.Schedule(test.pod, framework.NewPluginContext())
if !reflect.DeepEqual(err, test.wErr) {
if err != nil && !reflect.DeepEqual(err, test.wErr) {

alculquicondor (Member) commented Aug 29, 2019:

why is this change necessary?

wgliang (Member, author) commented Aug 30, 2019:

Because the returned err may be nil; otherwise reflect.DeepEqual(err, test.wErr) in the test will panic.

mrkm4ntr (Member) commented Aug 30, 2019:

reflect.DeepEqual(err, test.wErr) in the test will panic

No, it won't panic.
https://play.golang.org/p/nLXk5MDyc7k
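
An equivalent standalone check of the same point (independent of the linked playground): comparing two nil errors with reflect.DeepEqual returns true and does not panic.

package main

import (
	"fmt"
	"reflect"
)

func main() {
	var err error  // nil error returned by the scheduler
	var wErr error // nil expected error in the test table
	fmt.Println(reflect.DeepEqual(err, wErr)) // true, no panic
}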

wgliang (Member, author) commented Aug 30, 2019:

On this line of code I did get a nil pointer error in my development environment, which is why I originally made this change. But I can no longer reproduce it, which is odd.

I have reverted this change; thanks to all of you for the reminder.

@@ -1307,6 +1307,9 @@ func TestSelectNodesForPreemption(t *testing.T) {
labelKeys := []string{"hostname", "zone", "region"}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
g := &genericScheduler{
framework: emptyFramework,

alculquicondor (Member) commented Aug 29, 2019:

Can we add some tests with a non-empty framework? Basically, to have a regression test.

wgliang (Member, author) commented Aug 30, 2019:

TestSelectNodesForPreemption mainly tests selectNodesForPreemption, so an empty framework did not seem necessary. However, I have switched to a non-empty framework. If I have misunderstood your comment, please let me know, thanks. :)



}

return len(failedPredicates) == 0, failedPredicates, nil
return true, failedPredicates, status, nil

ahg-g (Member) commented Aug 29, 2019:

lines 682-686 should be replaced with:

fit := len(failedPredicates) == 0 && status.IsSuccess()
return fit, failedPredicates, status, nil

wgliang (Member, author) commented Aug 30, 2019:

SGTM, done. Thanks!

@@ -676,9 +671,19 @@ func podFitsOnNode(
}
}
}

// Iterate each plugin to verify current node

ahg-g (Member) commented Aug 29, 2019:

please remove this comment, it does not read well.

wgliang (Member, author) commented Aug 30, 2019:

DONE

filteredNodesStatuses[nodeName] = status
} else {
failedPredicateMap[nodeName] = failedPredicates
}

ahg-g (Member) commented Aug 29, 2019:

We should add both:

if !status.IsSuccess() {
  filteredNodesStatuses[nodeName] = status
}
if len(failedPredicates) > 0 {
  failedPredicateMap[nodeName] = failedPredicates
}

wgliang (Member, author) commented Aug 30, 2019:

Fixed

ahg-g (Member) commented Aug 30, 2019:

@k82cn can you please add the 1.16 milestone for this one as well? This is a bug fix.

alculquicondor (Member) commented Aug 30, 2019:

You can rebase now

@wgliang wgliang force-pushed the wgliang:bugfix/filter-plugins-are-not-been-called-during-preemption branch from f340772 to 6da5f9e Aug 31, 2019
wgliang (Member, author) commented Aug 31, 2019:

You can rebase now

Ok, done

k82cn (Member) commented Sep 2, 2019:

/milestone v1.16

@k8s-ci-robot k8s-ci-robot added this to the v1.16 milestone Sep 2, 2019
alculquicondor (Member) commented Sep 3, 2019:

Implementation LGTM. Let's just add the regression test.

@wgliang wgliang force-pushed the wgliang:bugfix/filter-plugins-are-not-been-called-during-preemption branch from 6da5f9e to 892ddf5 Sep 5, 2019
alculquicondor (Member) commented Sep 5, 2019:

/lgtm

@@ -1119,14 +1119,31 @@ var startTime20190107 = metav1.Date(2019, 1, 7, 1, 1, 1, 0, time.UTC)
// that podsFitsOnNode works correctly and is tested separately.
func TestSelectNodesForPreemption(t *testing.T) {
defer algorithmpredicates.SetPredicatesOrderingDuringTest(order)()

filterPluginRegistry := framework.Registry{filterPlugin.Name(): NewFilterPlugin}

ahg-g (Member) commented Sep 5, 2019:

We faced problems in the past when using shared variables for plugin instances; it would be great if we could change the way we instantiate plugins to be similar to what we did in the integration tests.

Define a generic function for instantiation:

// newPlugin returns a plugin factory with specified Plugin.
func newPlugin(plugin framework.Plugin) framework.PluginFactory {
	return func(_ *runtime.Unknown, fh framework.FrameworkHandle) (framework.Plugin, error) {
		return plugin, nil
	}
}

and here, you can just do:

filterPlugin := &FilterPlugin{}
registry := framework.Registry{filterPluginName: newPlugin(filterPlugin)}

The above should allow you to remove the global instance filterPlugin.

wgliang (Member, author) commented Sep 6, 2019:

DONE

guineveresaenger (Contributor) commented Sep 5, 2019:

@ahg-g @alculquicondor @wgliang

I'm the Release Lead Shadow for the 1.16 release.

This is an outstanding 1.16 PR and needs to be merged before EOD Monday 9/9, in time for our rc cut.

It looks like, according to @ahg-g's review, this isn't actually marked lgtm? Can someone update with an expected time of completion? Thank you! ❤️

alculquicondor (Member) commented Sep 5, 2019:

It looks like, according to @ahg-g's review, this isn't actually marked lgtm? Can someone update with an expected time of completion? Thank you!

Exactly, @wgliang is going to handle the latest minor suggestion

/lgtm cancel

@k8s-ci-robot k8s-ci-robot removed the lgtm label Sep 5, 2019
guineveresaenger (Contributor) commented Sep 5, 2019:

Do you have a reasonable notion of when this will be done, @alculquicondor? One thing that's missing is an Approver; it might be helpful to find someone so we can 🚢 it!

ahg-g (Member) commented Sep 5, 2019:

I can approve it.

wgliang (Member, author) commented Sep 6, 2019:

/retest

@wgliang wgliang force-pushed the wgliang:bugfix/filter-plugins-are-not-been-called-during-preemption branch from 892ddf5 to d84a75c Sep 6, 2019
wgliang (Member, author) commented Sep 6, 2019:

@ahg-g @alculquicondor I have addressed all the review comments. Thank you for your review. :)

ahg-g (Member) commented Sep 6, 2019:

/lgtm
/approve
Thanks @wgliang

k8s-ci-robot (Contributor) commented Sep 6, 2019:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ahg-g, wgliang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit c58767b into kubernetes:master Sep 6, 2019
24 checks passed:

cla/linuxfoundation: wgliang authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-conformance-image-test: Skipped.
pull-kubernetes-conformance-kind-ipv6: Skipped.
pull-kubernetes-cross: Skipped.
pull-kubernetes-dependencies: Job succeeded.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-csi-serial: Skipped.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gce-iscsi: Skipped.
pull-kubernetes-e2e-gce-iscsi-serial: Skipped.
pull-kubernetes-e2e-gce-storage-slow: Skipped.
pull-kubernetes-godeps: Skipped.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-node-e2e-containerd: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
pull-publishing-bot-validate: Skipped.
tide: In merge pool.