
Even Pods Spread - 5. Priority Core #79063

Merged
merged 8 commits into kubernetes:master from Huang-Wei:eps-priority on Jul 27, 2019

Conversation

@Huang-Wei (Member) commented Jun 15, 2019

What type of PR is this?

/sig scheduling
/kind feature
/priority important-soon
/hold

/assign @bsalamat
/cc @krmayankk

What this PR does / why we need it:

This is the 5th PR of the "Even Pods Spread" KEP implementation. After this PR, users can run workloads using "hard/soft topologySpreadConstraints", and high-priority Pods can preempt low-priority ones.

  • Define a new internal Priority to support the API spec whenUnsatisfiable: ScheduleAnyway (see the sketch below this list)
  • Core priority logic
  • Unit tests
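
For reference, a minimal sketch (not part of this PR) of what a soft constraint looks like on a Pod, assuming the alpha TopologySpreadConstraint API fields introduced earlier in this KEP series; the "zone" key and "app: demo" labels are hypothetical values:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod carrying one soft topology spread constraint; the new Priority
	// scores nodes for constraints whose WhenUnsatisfiable is ScheduleAnyway.
	pod := v1.Pod{
		Spec: v1.PodSpec{
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{
				{
					MaxSkew:           1,
					TopologyKey:       "zone", // hypothetical node label key
					WhenUnsatisfiable: v1.ScheduleAnyway,
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "demo"},
					},
				},
			},
		},
	}
	fmt.Println(pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}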

Which issue(s) this PR fixes:

Part of #77284.

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

(will document all the changes in one place)

NONE

@k8s-ci-robot requested a review from @krmayankk on Jun 15, 2019

@Huang-Wei referenced this pull request on Jun 15, 2019: Umbrella issue for Even Pods Spread [Alpha] #77284 (Open, 6 of 9 tasks complete)

@Huang-Wei force-pushed the Huang-Wei:eps-priority branch from 98f256d to 09d5c1f on Jun 15, 2019

@Huang-Wei changed the title from "Even Pods Spread - 4. Priority Core" to "Even Pods Spread - 5. Priority Core" on Jun 20, 2019

@Huang-Wei force-pushed the Huang-Wei:eps-priority branch from 09d5c1f to 39b6ea1 on Jul 3, 2019

@Huang-Wei (Member, Author) commented Jul 12, 2019

some discussion on the Normalize function: https://kubernetes.slack.com/archives/C09TP78DV/p1557473288102500

@ahg-g (Member) left a comment

similar to the previous PRs, this looks good to me as an initial implementation.


// CalculateEvenPodsSpreadPriority computes a score by checking through the topologySpreadConstraints
// that are with WhenUnsatifiable=ScheduleAnyway (a.k.a soft constraint).
// For each node (not only "filtered" nodes by Predicates), it adds the number of matching pods

@ahg-g (Member) commented Jul 18, 2019

"not only "filtered" nodes by Predicates" seems to mean nodes that were filtered (i.e., excluded) by the predicates, which I think is the opposite of what you mean.

Just to make it less confusing, I would re-phrase: "(not only the nodes that passed the predicates)" or "(not only "filtered" nodes)"

@Huang-Wei (Member, Author) commented Jul 24, 2019

SG - yes, I meant to say "all nodes" (not only the nodes that passed the predicates)

// which has the <topologyKey:value> pair present.
// Then the sumed "weight" are normalized to 0~10, and the node(s) with the highest score are
// the most preferred.
// Symmetry is not considered.

@ahg-g (Member) commented Jul 18, 2019

Can you explain what you mean by "symmetry" in this context?

@Huang-Wei (Member, Author) commented Jul 24, 2019

It means we only weigh how incomingPod matches existingPod; how existingPod matches incomingPod doesn't contribute to the final score. This is different from the Affinity API.


func (t *topologySpreadConstrantsMap) initialize(pod *v1.Pod, nodes []*v1.Node) {
constraints := getSoftTopologySpreadConstraints(pod)
for _, node := range nodes {

@ahg-g (Member) commented Jul 18, 2019

should we run this in parallel?

@Huang-Wei (Member, Author) commented Jul 24, 2019

We can.

@Huang-Wei (Member, Author) commented Jul 25, 2019

Maybe not worth it due to the cost of locking. And here nodes are only the filtered (candidate) nodes.

type topologySpreadConstrantsMap struct {
// The first error that we faced.
firstError error
sync.Mutex

@ahg-g (Member) commented Jul 18, 2019

Just a small reminder to use the new ErrorChannel once you rebase instead of this.
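
For context, a minimal sketch (my illustration, not the PR's code) of how the scheduler's ErrorChannel pairs with workqueue.ParallelizeUntil to replace a firstError field guarded by a mutex; scoreNode and the node names are hypothetical stand-ins:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/util/workqueue"
	schedutil "k8s.io/kubernetes/pkg/scheduler/util"
)

// scoreNode is a hypothetical per-node computation used only for illustration.
func scoreNode(name string) error { return nil }

func main() {
	nodeNames := []string{"node-a", "node-b", "node-c"}
	errCh := schedutil.NewErrorChannel()
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	processNode := func(i int) {
		if err := scoreNode(nodeNames[i]); err != nil {
			// Records the first error and cancels the remaining work pieces.
			errCh.SendErrorWithCancel(err, cancel)
		}
	}
	workqueue.ParallelizeUntil(ctx, 16, len(nodeNames), processNode)

	if err := errCh.ReceiveError(); err != nil {
		fmt.Println("priority computation failed:", err)
	}
}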

@alculquicondor (Member) commented Jul 19, 2019

Also, I don't see the need to have it in the struct, but rather just the method.

@Huang-Wei (Member, Author) commented Jul 25, 2019

Updated.

}
}
}
workqueue.ParallelizeUntil(ctx, 16, len(allNodeNames), processNode)

@ahg-g (Member) commented Jul 18, 2019

How is this going to translate to framework plugins? I think you will need to run this in a pre-filter phase because it is the earliest extension point that does not iterate over nodes, which is not ideal because the pod may get rejected before we get to actually score the nodes and so that pre-computation will be wasted.

I think we should have a "pre-score" extension point that is similar to pre-filter semantically to address this issue.

for _, pair := range pairs {
t.topologyPairToNodeNames[pair] = append(t.topologyPairToNodeNames[pair], node.Name)
}
t.counts[node.Name] = new(int64)

@ahg-g (Member) commented Jul 18, 2019

I feel this is going to involve a lot of unnecessary allocations; how about we define a static array of ints, and use -1 to indicate the "nil" case?

@Huang-Wei (Member, Author) commented Jul 25, 2019

A little background on using *int64 over int64: with *int64, we can leverage atomic.AddInt64 to eliminate the lock. The cost of allocating int64 pointers (especially as it's an on-demand initialization) is outweighed by the cost of locking/unlocking.
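
A minimal sketch (my illustration, not the PR's code) of that pattern: counters are pre-allocated as *int64 values, so parallel workers can bump them with atomic.AddInt64 and no mutex, because the map itself is never mutated concurrently, only the pointed-to integers are:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	// Counters are pre-allocated before the parallel phase starts.
	counts := map[string]*int64{
		"node-a": new(int64),
		"node-b": new(int64),
	}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Concurrent map reads are safe; only the *int64 targets change.
			atomic.AddInt64(counts["node-a"], 1)
		}()
	}
	wg.Wait()
	fmt.Println(*counts["node-a"]) // prints 100
}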

@Huang-Wei (Member, Author) commented Jul 26, 2019

Never mind; in the latest version I almost rewrote the Priority function and adopted this suggestion.


// counts store the mapping from node name to so-far computed score of
// the node.
counts map[string]*int64

@ahg-g (Member) commented Jul 18, 2019

podCounts instead of counts?

// need to reverse b/c the more matching pods it has, the less qualified it is
// result[i].Score = schedulerapi.MaxPriority - int(fScore)
result[i].Score = int(fScore)
}

@ahg-g (Member) commented Jul 18, 2019

this we can run in parallel as well.

@alculquicondor (Member) commented Jul 19, 2019

It seems like a not very expensive loop (we only iterate through nodes, not pods). It might require perf-testing.

@ahg-g (Member) commented Jul 26, 2019

Probably; it depends on scale and cluster state. Not worth doing it now though, I agree.

"k8s.io/kubernetes/pkg/scheduler/algorithm/predicates"
schedulerapi "k8s.io/kubernetes/pkg/scheduler/api"
schedulernodeinfo "k8s.io/kubernetes/pkg/scheduler/nodeinfo"

@alculquicondor (Member) commented Jul 19, 2019

remove empty line

@Huang-Wei (Member, Author) commented Jul 25, 2019

It seems to be a convention to keep "k8s.io/klog" apart from the other imports.

@alculquicondor (Member) commented Jul 25, 2019

Uhm... that's a weird one. It seems to be the second block though.

(two resolved review threads on pkg/scheduler/algorithm/priorities/even_pods_spread.go, outdated)

(resolved review thread on pkg/scheduler/algorithm/priorities/even_pods_spread.go, outdated)
}
matchCount := 0
for _, existingPod := range nodeInfo.Pods() {
match, err := predicates.PodMatchesAllSpreadConstraints(existingPod, pod.Namespace, constraints)

@alculquicondor (Member) commented Jul 19, 2019

Reminder of pending discussion in PR 2

@Huang-Wei (Member, Author) commented Jul 26, 2019

Updated to respect each individual constraint independently.

// that are with WhenUnsatifiable=ScheduleAnyway (a.k.a soft constraint).
// For each node (not only "filtered" nodes by Predicates), it adds the number of matching pods
// (all topologySpreadConstraints must be satified) as a "weight" to any "filtered" node
// which has the <topologyKey:value> pair present.

@alculquicondor (Member) commented Jul 19, 2019

I think this comment could start with a more high-level (and more readable) sentence. What we are trying to say is that, for each node, we want to sum up the number of pods in all the nodes in its same topology domain. But also mention that we reverse that number? (total - counts[node])
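
One plausible reading of that suggestion, as a rough sketch (my numbers and formula, not the PR's exact code): count matching pods per topology domain, attribute the sum to each candidate node, reverse it (more matches means a worse score), and scale into 0..schedulerapi.MaxPriority:

package main

import "fmt"

const maxPriority = 10 // schedulerapi.MaxPriority at the time

func main() {
	// Hypothetical counts of matching pods attributed to each candidate node.
	counts := map[string]int64{"node-a": 4, "node-b": 1, "node-c": 0}

	var total, best int64
	for _, c := range counts {
		total += c
	}
	reversed := map[string]int64{}
	for node, c := range counts {
		reversed[node] = total - c // reverse: fewer matching pods yields a larger value
		if reversed[node] > best {
			best = reversed[node]
		}
	}
	for node, r := range reversed {
		score := int64(0)
		if best > 0 {
			score = maxPriority * r / best // normalize between 0 and the best node
		}
		fmt.Printf("%s: %d\n", node, score) // node-c (no matches) scores highest
	}
}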

t.counts[node.Name] = &count
}
// calculate final priority score for each node
// TODO(Huang-Wei): in alpha version, we keep the formula as simple as possible.

@alculquicondor (Member) commented Jul 19, 2019

Consider creating an issue for this

@alculquicondor (Member) commented Jul 25, 2019

In my mind, if the node satisfies the constraints as if they were hard, its priority should be schedulerapi.MaxPriority.

@Huang-Wei (Member, Author) commented Jul 26, 2019

In that case, we have to re-check the formula implemented in Predicates, and that calculation effort isn't trivial. IMO it's not worth it for a Priority; the current algorithm can also rank the nodes well.

@alculquicondor (Member) commented Jul 26, 2019

We should be able to reuse calculations once we move to the framework. Can you open a tracking bug to consider a new formula?

@ahg-g (Member) commented Jul 26, 2019

we can't reuse the calculations because the predicate metadata is only computed for the hard constraints, not all constraints.

@Huang-Wei (Member, Author) commented Jul 26, 2019

@ahg-g is right, they are separate constraints :)

@alculquicondor (Member) commented Jul 29, 2019

I know... but the calculation is the same, if I'm not missing something. We should be able to iterate only once through all the nodes for all the constraints.

@ahg-g (Member) commented Jul 29, 2019

I don't think the calculation is the same; we don't iterate over the soft constraints now at all, so they are not matched against the node or the pods on the nodes. If we do that, and the node gets filtered out before getting scored, that calculation becomes a waste.

(resolved review thread on pkg/scheduler/algorithm/priorities/even_pods_spread.go, outdated)

factory.InsertPredicateKeyToAlgorithmProviderMap(predicates.EvenPodsSpreadPred)
factory.RegisterFitPredicate(predicates.EvenPodsSpreadPred, predicates.EvenPodsSpreadPredicate)
// register priority
factory.InsertPriorityKeyToAlgorithmProviderMap(priorities.EvenPodsSpreadPriority)
factory.RegisterPriorityFunction(priorities.EvenPodsSpreadPriority, priorities.CalculateEvenPodsSpreadPriority, 1)

@draveness (Member) commented Jul 21, 2019

This function has already been deprecated; we should use the map-reduce pattern instead. I created #80225 to remove this from the codebase.

// RegisterPriorityFunction registers a priority function with the algorithm registry. Returns the name,
// with which the function was registered.
// DEPRECATED
// Use Map-Reduce pattern for priority functions.

@draveness (Member) commented Jul 26, 2019

Friendly reminder on the usage of deprecated RegisterPriorityFunction. :)

@Huang-Wei (Member, Author) commented Jul 26, 2019

#80225 needs further consideration. I will take another round of review later.

@alculquicondor referenced this pull request on Jul 24, 2019: REQUEST: New membership for alculquicondor #1042 (Closed, 6 of 6 tasks complete)

@Huang-Wei force-pushed the Huang-Wei:eps-priority branch from 39b6ea1 to d36d59b on Jul 25, 2019

@k8s-ci-robot (Contributor) commented Jul 25, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Huang-Wei

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Huang-Wei force-pushed the Huang-Wei:eps-priority branch from d36d59b to c1acef3 on Jul 25, 2019

@Huang-Wei force-pushed the Huang-Wei:eps-priority branch from de5b5cb to 32d329a on Jul 26, 2019

// Note: Symmetry is not applicable. We only weigh how incomingPod matches existingPod.
// Whether existingPod matches incomingPod doesn't contribute to the final score.
// This is different with the Affinity API.
func CalculateEvenPodsSpreadPriority(pod *v1.Pod, nodeNameToInfo map[string]*schedulernodeinfo.NodeInfo, nodes []*v1.Node) (schedulerapi.HostPriorityList, error) {

@draveness (Member) commented Jul 26, 2019

Where do we call this priority function? This is the only place the "CalculateEvenPodsSpreadPriority" string appears.

@Huang-Wei (Member, Author) commented Jul 26, 2019

Good question... I only realized the issue when I wrote the integration test (i.e., in PR 6 you can see the fix; it's actually a typo in defaults.go...).

But since you raised this, to avoid confusion, let me move that fix from PR 6 into this PR.

BTW: we can see the benefits of having different kinds of tests :)

@Huang-Wei (Member, Author) commented Jul 26, 2019

Oh... in addition to that typo, I made another rebasing mistake...

@Huang-Wei (Member, Author) commented Jul 26, 2019

Done. Squashed into the commit "EvenPodsSpread: Define a new Priority".

Thanks for the catch! (Otherwise, the next PR is the only guard...)

To get the benefit of keeping each PR CI-green, I spent way too much energy on rebasing...

@draveness (Member) commented Jul 26, 2019

> Done. Squashed into the commit "EvenPodsSpread: Define a new Priority".
> Thanks for the catch! (Otherwise, the next PR is the only guard...)
> To get the benefit of keeping each PR CI-green, I spent way too much energy on rebasing...

The rebasing could cause tons of work. Thanks for breaking this into pieces.

@Huang-Wei force-pushed the Huang-Wei:eps-priority branch from 32d329a to a5826a8 on Jul 26, 2019

for _, constraint := range constraints {
if tpVal, ok := node.Labels[constraint.TopologyKey]; ok {
pair := topologyPair{key: constraint.TopologyKey, value: tpVal}
matchSum := atomic.LoadInt64(t.topologyPairToPodCounts[pair])

@alculquicondor (Member) commented Jul 26, 2019

This is not running in parallel, so no need to use atomic. Same below.

@Huang-Wei (Member, Author) commented Jul 26, 2019

This case is a little tricky b/c we need to concurrently compare with minCount, which is a racy operation. We may need to use atomic.CompareAndSwapInt64. Will give it a try.
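
A rough sketch (my illustration of the idea, not the merged code) of a lock-free "track the minimum" update with atomic.CompareAndSwapInt64, retried until the swap wins or the candidate is no longer smaller; the input counts are hypothetical:

package main

import (
	"fmt"
	"math"
	"sync"
	"sync/atomic"
)

func main() {
	minCount := int64(math.MaxInt64) // sentinel: nothing recorded yet
	var wg sync.WaitGroup
	for _, c := range []int64{7, 3, 9, 5} {
		wg.Add(1)
		go func(c int64) {
			defer wg.Done()
			for {
				cur := atomic.LoadInt64(&minCount)
				if c >= cur {
					return // not smaller than the current minimum
				}
				if atomic.CompareAndSwapInt64(&minCount, cur, c) {
					return // swap succeeded
				}
				// Another goroutine updated minCount in between; retry.
			}
		}(c)
	}
	wg.Wait()
	fmt.Println(atomic.LoadInt64(&minCount)) // prints 3
}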

@Huang-Wei (Member, Author) commented Jul 27, 2019

I did a test (gist) and it seems the parallel version doesn't help much.


// And we add matchCount to <t.total> to reverse the final score later.
for _, nodeName := range t.topologyPairToNodeNames[pair] {
atomic.AddInt64(t.podCounts[nodeName], matchCount[tpKey])
atomic.AddInt64(&t.total, matchCount[tpKey])

@alculquicondor (Member) commented Jul 26, 2019

Right, if you normalize between the best and worst nodes, then it doesn't serve any purpose. But you are (now?) normalizing between 0 and the best node.

// nodeNameToPodCounts is keyed with node name, and valued with the number of matching pods.
nodeNameToPodCounts map[string]int64
// topologyPairToPodCounts is keyed with topologyPair, and valued with the number of matching pods.
topologyPairToPodCounts map[topologyPair]*int64

@ahg-g (Member) commented Jul 26, 2019

Can't we use int64 instead of a pointer? line 131 would look like this:

if _, ok := t.topologyPairToPodCounts[pair]; !ok {
	continue
}

and lines 75-77 can be changed to just:

t.topologyPairToPodCounts[pair] = 0

@Huang-Wei (Member, Author) commented Jul 26, 2019

Unfortunately no. Go doesn't support addressing a map value, i.e., if topologyPairToPodCounts is defined as map[topologyPair]int64, we can't take &topologyPairToPodCounts[pair].
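
A tiny sketch (my illustration) of that limitation: Go map values are not addressable, so a map valued with int64 can't feed atomic operations directly, whereas a map of *int64 can:

package main

import (
	"fmt"
	"sync/atomic"
)

type topologyPair struct{ key, value string }

func main() {
	pair := topologyPair{key: "zone", value: "zone1"}

	byValue := map[topologyPair]int64{pair: 0}
	// atomic.AddInt64(&byValue[pair], 1) // compile error: cannot take the address of a map value
	byValue[pair]++ // plain (non-atomic) updates still work

	byPointer := map[topologyPair]*int64{pair: new(int64)}
	atomic.AddInt64(byPointer[pair], 1) // fine: the pointed-to int64 is addressable

	fmt.Println(byValue[pair], *byPointer[pair]) // prints 1 1
}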

} else {
// For those nodes which don't have all required topologyKeys present, it's intentional to
// assign nodeNameToPodCounts[nodeName] as -1, so that we're able to score these nodes to 0 afterwards.
t.nodeNameToPodCounts[node.Name] = -1

@ahg-g (Member) commented Jul 26, 2019

Same thing here. I know I suggested this before, but perhaps a better way to do this is to set the value to 0 for the nodes that we want to consider:

add the following after line 78

t.nodeNameToPodCounts[node.Name] = 0

and lines 159 - 161 can be:

if _, ok := t.nodeNameToPodCounts[node.Name]; !ok {
	continue
}

and 195:

if _, ok := t.nodeNameToPodCounts[node.Name]; !ok {

@ahg-g (Member) commented Jul 26, 2019

If we do the above, the whole initialization function can be simplified to (given that we expect few constraints on a pod):

func (t *topologySpreadConstraintsMap) initialize(pod *v1.Pod, nodes []*v1.Node) {
	constraints := getSoftTopologySpreadConstraints(pod)
	for _, node := range nodes {
		if !predicates.NodeLabelsMatchSpreadConstraints(node.Labels, constraints) {
			continue
		}
		for _, constraint := range constraints {
			tpKey := constraint.TopologyKey
			t.topologyPairToPodCounts[topologyPair{key: tpKey, value: node.Labels[tpKey]}] = 0
		}
		t.nodeNameToPodCounts[node.Name] = 0
	}
}

@Huang-Wei (Member, Author) commented Jul 26, 2019

Good point. Will update.


for _, constraint := range constraints {
pair := topologyPair{key: constraint.TopologyKey, value: node.Labels[constraint.TopologyKey]}
// If current topology pair is not associated with any "filtered" node,

@ahg-g (Member) commented Jul 26, 2019

to me "filtered node" means a node that was filtered out and not considered, perhaps we should use "node that passed all filters" or "candidate node"

@Huang-Wei (Member, Author) commented Jul 26, 2019

"candidate node" sounds good.

// that are with WhenUnsatisfiable=ScheduleAnyway (a.k.a soft constraint).
// The function works as below:
// 1) In all nodes, calculate the number of pods which match <pod>'s soft topology spread constraints.
// 2) Sum up the number to each node in <nodes> which has corresponding topologyPair present.

@ahg-g (Member) commented Jul 26, 2019

Did you mean "sum up the number of pods in each node in <nodes>"?

@ahg-g (Member) commented Jul 26, 2019

perhaps rephrase the last part to "has all topologyKeys and matches affinity terms of the incoming pod"?

@Huang-Wei (Member, Author) commented Jul 26, 2019

> Did you mean "sum up the number of pods in each node in <nodes>"?

Yes, I will rephrase it to "sum up the number calculated in 1) to ...".

> Perhaps rephrase the last part to "has all topologyKeys and matches affinity terms of the incoming pod"?

Not exactly. It's been rephrased another way; please see if it makes sense.

// 3) Finally normalize the number to 0~10. The node with the highest score is the most preferred.
// Note: Symmetry is not applicable. We only weigh how incomingPod matches existingPod.
// Whether existingPod matches incomingPod doesn't contribute to the final score.
// This is different with the Affinity API.

@ahg-g (Member) commented Jul 26, 2019

s/with/from


@Huang-Wei (Member, Author) commented Jul 27, 2019

Comments addressed. @ahg-g @alculquicondor PTAL.

@ahg-g (Member) commented Jul 27, 2019

/lgtm

Congrats on landing this feature!

@k8s-ci-robot added the lgtm label on Jul 27, 2019

Huang-Wei added some commits May 8, 2019

EvenPodsSpread: weigh constraints individually
- update logic to weigh each constraint individually
- address comments and misc fixes

@Huang-Wei force-pushed the Huang-Wei:eps-priority branch from bbd0339 to 755a311 on Jul 27, 2019

@k8s-ci-robot removed the lgtm label on Jul 27, 2019

@Huang-Wei (Member, Author) commented Jul 27, 2019

Phew, one PR left. @ahg-g thanks for your reviews as usual.

BTW: I squashed and reordered the commits slightly. Will request a /lgtm later.

@Huang-Wei (Member, Author) commented Jul 27, 2019

@ahg-g @alculquicondor @draveness would you mind re-labeling /lgtm?

@draveness (Member) left a comment

/lgtm

Thanks @Huang-Wei !

@Huang-Wei (Member, Author) commented Jul 27, 2019

/hold cancel

@fejta-bot commented Jul 27, 2019

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@k8s-ci-robot merged commit b344fb0 into kubernetes:master on Jul 27, 2019

23 checks passed:

cla/linuxfoundation: Huang-Wei authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-conformance-image-test: Skipped.
pull-kubernetes-cross: Skipped.
pull-kubernetes-dependencies: Job succeeded.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-csi-serial: Skipped.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gce-iscsi: Skipped.
pull-kubernetes-e2e-gce-iscsi-serial: Skipped.
pull-kubernetes-e2e-gce-storage-slow: Skipped.
pull-kubernetes-godeps: Skipped.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-node-e2e-containerd: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
pull-publishing-bot-validate: Skipped.
tide: In merge pool.

errCh := schedutil.NewErrorChannel()
ctx, cancel := context.WithCancel(context.Background())
processAllNode := func(i int) {

@tedyu (Contributor) commented Jul 27, 2019

processNode seems to be a better name.

@Huang-Wei (Member, Author) commented Jul 27, 2019

That's the old name. Here I want to highlight that this function works on all nodes, instead of only the candidate nodes that have passed the Predicates check.

@Huang-Wei deleted the Huang-Wei:eps-priority branch on Jul 27, 2019
