
More pod-affinity code cleanup and prepare for parallelization #29109

Merged (1 commit) Jul 22, 2016

Conversation

wojtek-t (Member):
Ref #26144

@wojtek-t wojtek-t added the release-note-none Denotes a PR that doesn't merit a release note. label Jul 18, 2016
@k8s-github-robot k8s-github-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Jul 18, 2016
```diff
 }

 // AnyPodMatchesPodAffinityTerm checks if any of given pods can match the specific podAffinityTerm.
-func (checker *PodAffinityChecker) AnyPodMatchesPodAffinityTerm(pod *api.Pod, allPods []*api.Pod, node *api.Node, podAffinityTerm api.PodAffinityTerm) (bool, error) {
+// TODO: Do we really need any pod matching, or all pods matching? I think the latter.
+func (checker *PodAffinityChecker) AnyPodMatchesPodAffinityTerm(pod *api.Pod, allPods []*api.Pod, node *api.Node, podAffinityTerm api.PodAffinityTerm) (bool, bool, error) {
```
wojtek-t (Member Author):

This additionally returns whether any pod matching the given term exists, to avoid checking it again in the next function.

Member:

(Deleted earlier comment.) Ah, I think I understand. matchingPodExists means "matching pod exists anywhere" while the first return value means "matching pod exists on a node that matches the topology key" ?

If so, please add a comment like

// First return value indicates whether a matching pod exists on a node that matches the topology key,
// while the second return value indicates whether a matching pod exists anywhere.

wojtek-t (Member Author):

Yes, exactly. I will extend the comment as you suggested.
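For illustration, the two return values being discussed can be sketched with a stripped-down, hypothetical version of the function (the `Pod` type, `Zone` field, and helper names below are simplifications, not the actual scheduler code):

```go
package main

import "fmt"

// Pod is a hypothetical, stripped-down stand-in for api.Pod.
type Pod struct {
	Labels map[string]string
	Zone   string // stand-in for the topology-key value of the node the pod runs on
}

// labelsMatch reports whether labels satisfy every key/value in selector.
func labelsMatch(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// anyPodMatchesTerm is a simplified sketch of AnyPodMatchesPodAffinityTerm.
// First return value indicates whether a matching pod exists on a node that
// matches the topology key (here: same zone as the candidate node), while the
// second return value indicates whether a matching pod exists anywhere.
func anyPodMatchesTerm(allPods []Pod, selector map[string]string, candidateZone string) (bool, bool) {
	matchesTopology, matchExists := false, false
	for _, p := range allPods {
		if !labelsMatch(p.Labels, selector) {
			continue
		}
		matchExists = true
		if p.Zone == candidateZone {
			matchesTopology = true
		}
	}
	return matchesTopology, matchExists
}

func main() {
	// One storage pod exists, but in a different zone than the candidate node.
	pods := []Pod{{Labels: map[string]string{"app": "storage"}, Zone: "zone-b"}}
	onTopology, anywhere := anyPodMatchesTerm(pods, map[string]string{"app": "storage"}, "zone-a")
	fmt.Println(onTopology, anywhere) // false true
}
```

Returning both booleans in one pass over `allPods` is what avoids the second scan mentioned earlier in the thread.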

```go
}

// AnyPodMatchesPodAffinityTerm checks if any of given pods can match the specific podAffinityTerm.
func (checker *PodAffinityChecker) AnyPodMatchesPodAffinityTerm(pod *api.Pod, allPods []*api.Pod, node *api.Node, podAffinityTerm api.PodAffinityTerm) (bool, error) {
// TODO: Do we really need any pod matching, or all pods matching? I think the latter.
```
Member:

Why do you think it should be all pods matching?

When called from NodeMatchesHardPodAffinity() it is enforcing a rule like "only put the new pod on a node that is already running a pod from the storage service, since the new pod needs very low-latency communication with the storage service." You don't need all the pods on the node to match, just one, for the node to be accepted.

When called from NodeMatchesHardPodAntiAffinity() it is enforcing a rule like "do not put the new pod on a node that is already running (some pod that the new pod would conflict with)." You don't need all the pods on the node to match, just one, for the node to be rejected.
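The "just one matching pod is enough" argument can be sketched as follows (hypothetical helper names, not the real predicate code): hard affinity accepts a node as soon as one pod on it matches, and hard anti-affinity rejects it as soon as one pod matches.

```go
package main

import "fmt"

// podMatches reports whether a pod's labels satisfy a selector (hypothetical helper).
func podMatches(podLabels, selector map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

// nodeOKForHardAffinity: the node is acceptable if ANY pod on it matches --
// e.g. "run next to the storage service"; one storage pod suffices.
func nodeOKForHardAffinity(podsOnNode []map[string]string, selector map[string]string) bool {
	for _, labels := range podsOnNode {
		if podMatches(labels, selector) {
			return true
		}
	}
	return false
}

// nodeOKForHardAntiAffinity: the node is rejected if ANY pod on it matches --
// one conflicting pod is enough to rule the node out.
func nodeOKForHardAntiAffinity(podsOnNode []map[string]string, selector map[string]string) bool {
	return !nodeOKForHardAffinity(podsOnNode, selector)
}

func main() {
	podsOnNode := []map[string]string{
		{"app": "storage"},
		{"app": "web"},
	}
	sel := map[string]string{"app": "storage"}
	fmt.Println(nodeOKForHardAffinity(podsOnNode, sel))     // true: one storage pod is enough
	fmt.Println(nodeOKForHardAntiAffinity(podsOnNode, sel)) // false: one conflict is enough
}
```

Note the symmetry: both predicates short-circuit on the first match; they just interpret the match with opposite polarity.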

wojtek-t (Member Author):

I definitely agree that for "Anti-Affinity" it should be "some pod".

However, for "Affinity", I'm not that convinced. In the service case you described above, a request to the service may be redirected to any of its pods. So my feeling is that it should be "close to all pods".

Member:

OK, that's a valid point, let me refine my example.

  1. "Run on the same node" but where the two pods communicate via a hard-coded host port or have some out-of-band way to find each other and use localhost to communicate when they discover they share a node
  2. "Run in same zone" where we assume service affinity (Proposal: Service connection affinity #15675) has been implemented (service routes you to instance in your same zone, if one exists)

I don't understand the "close to all pods" use case. Can you give an example?

wojtek-t (Member Author):

OK, so what I had in mind is:

  • assume that we have a service and it has 2 underlying pods: 1 in zone A and 1 in zone B
  • then, if the pod we are scheduling is going to talk to this service, its connections will be forwarded to a random one of those 2 pods (we don't have service affinity, right?)
  • then, ideally, we would like to be close to both pods, because we don't know which one we will be talking to

However, in the example above this is impossible, so maybe it is enough to actually be close to "any of those"?
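The two-zone example can be made concrete with a small sketch (hypothetical names, not scheduler code): with one backend pod in zone A and one in zone B, no candidate zone is "close to all" backends, while both zones satisfy "close to any".

```go
package main

import "fmt"

// zonesSatisfying reports, for a candidate zone, whether it is co-located
// with any backend pod and whether it is co-located with all of them.
func zonesSatisfying(backendZones []string, candidate string) (anyMatch, allMatch bool) {
	allMatch = len(backendZones) > 0
	for _, z := range backendZones {
		if z == candidate {
			anyMatch = true
		} else {
			allMatch = false
		}
	}
	return anyMatch, allMatch
}

func main() {
	backends := []string{"zone-a", "zone-b"} // 1 pod in zone A, 1 in zone B
	for _, cand := range []string{"zone-a", "zone-b"} {
		anyMatch, allMatch := zonesSatisfying(backends, cand)
		fmt.Printf("%s: any=%v all=%v\n", cand, anyMatch, allMatch)
	}
}
```

Since `all` is unsatisfiable whenever the backends span more than one zone, an all-pods-matching rule would make every node infeasible here, which is the argument for settling on "any".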

@davidopp (Member):

Looks good except for a few comments.

@davidopp davidopp added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 22, 2016
@davidopp (Member):

LGTM

@k8s-bot commented Jul 22, 2016:

GCE e2e build/test passed for commit fad876b.

@k8s-github-robot:
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-bot commented Jul 22, 2016:

GCE e2e build/test passed for commit fad876b.

@k8s-github-robot:
Automatic merge from submit-queue
