Update TopologyManager algorithm for selecting "best" non-preferred hint #108154
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: klueska
/sig node
/triage accepted
This PR significantly improves the selection process of the best hints in case of non-preferred hints when multiple NUMA nodes are required for the "best" topology alignment. We now have a way more intelligent selection process rather than selecting the narrowest topology hint! I appreciate the detailed explanation, both in the PR description and as comments in the code, which helped a lot in the review process. The PR looks good to me and I am happy to give it an LGTM, but while reviewing this, I couldn't stop myself from thinking that if we maintained a list of mergedHints sorted by the corresponding […]

The current logic never builds a list of […]

Yeah, I was suggesting that we move to storing the merged hint (evaluated by performing the bitwise AND of a cross-product entry) rather than just evaluating by iterating over them. I am happy with the PR, going to put on hold so that other reviewers can provide their input. /lgtm

One thing I could do to maybe help write more comprehensive unit tests is to factor out the new logic into a standalone function and then test explicit inputs to that function. I struggled to write comprehensive tests because of the way the […]
(force-pushed from 886a9b3 to a226cdd)
/lgtm
For the 'single-numa' and 'restricted' `TopologyManager` policies, pods are only admitted if all of their containers have perfect alignment across the set of resources they are requesting. The `best-effort` policy, on the other hand, will prefer allocations that have perfect alignment, but fall back to a non-preferred alignment if perfect alignment can't be achieved.

The existing algorithm for choosing the best hint from the set of "non-preferred" hints is fairly naive and often results in choosing a sub-optimal hint. It works fine in cases where all resources would end up coming from a single NUMA node (even if it's not the same NUMA node), but breaks down as soon as multiple NUMA nodes are required for the "best" alignment. We will never be able to achieve perfect alignment with these non-preferred hints, but we should try and do something more intelligent than simply choosing the hint with the narrowest mask.

In an ideal world, we would have the `TopologyManager` return a set of "resource-relative" hints (as opposed to a common hint for all resources as is done today). Each resource-relative hint would indicate how many other resources could be aligned to it on a given NUMA node, and a hint provider would use this information to allocate its resources in the most aligned way possible. There are likely some edge cases to consider here, but such an algorithm would allow us to do partial-perfect-alignment of "some" resources, even if all resources could not be perfectly aligned. Unfortunately, supporting something like this would require a major redesign to how the `TopologyManager` interacts with its hint providers (as well as how those hint providers make decisions based on the hints they get back).

That said, we can still do better than the naive algorithm we have today, and this patch provides a mechanism to do so.

We start by looking at the set of hints passed into the `TopologyManager` for each resource and generate a list of the minimum number of NUMA nodes required to satisfy an allocation for a given resource. Each entry in this list then contains the `minNUMAAffinity.Count()` for a given resource. Once we have this list, we find the *maximum* `minNUMAAffinity.Count()` from the list and mark that as the `bestNonPreferredAffinityCount` that we would like to have associated with whatever "bestHint" we ultimately generate. The intuition being that we would like to (at the very least) get alignment for those resources that *require* multiple NUMA nodes to satisfy their allocation. If we can't quite get there, then we should try to come as close to it as possible.

Once we have this `bestNonPreferredAffinityCount`, the algorithm proceeds as follows:

If the mergedHint and bestHint are both non-preferred, then try and find a hint whose affinity count is as close to (but not higher than) the `bestNonPreferredAffinityCount` as possible. To do this we need to consider the following cases and react accordingly:

1. `bestHint.NUMANodeAffinity.Count() > bestNonPreferredAffinityCount`
2. `bestHint.NUMANodeAffinity.Count() == bestNonPreferredAffinityCount`
3. `bestHint.NUMANodeAffinity.Count() < bestNonPreferredAffinityCount`

For case (1), the current bestHint is larger than the bestNonPreferredAffinityCount, so updating to any narrower mergedHint is preferred over staying where we are.

For case (2), the current bestHint is equal to the bestNonPreferredAffinityCount, so we would like to stick with what we have *unless* the current mergedHint is also equal to bestNonPreferredAffinityCount and it is narrower.

For case (3), the current bestHint is less than bestNonPreferredAffinityCount, so we would like to creep back up to bestNonPreferredAffinityCount as close as we can. There are three cases to consider here:

3a. `mergedHint.NUMANodeAffinity.Count() > bestNonPreferredAffinityCount`
3b. `mergedHint.NUMANodeAffinity.Count() == bestNonPreferredAffinityCount`
3c. `mergedHint.NUMANodeAffinity.Count() < bestNonPreferredAffinityCount`

For case (3a), we just want to stick with the current bestHint because choosing a new hint that is greater than bestNonPreferredAffinityCount would be counter-productive.

For case (3b), we want to immediately update bestHint to the current mergedHint, making it now equal to bestNonPreferredAffinityCount.

For case (3c), we know that *both* the current bestHint and the current mergedHint are less than bestNonPreferredAffinityCount, so we want to choose one that brings us back up as close to bestNonPreferredAffinityCount as possible. There are three cases to consider here:

3ca. `mergedHint.NUMANodeAffinity.Count() > bestHint.NUMANodeAffinity.Count()`
3cb. `mergedHint.NUMANodeAffinity.Count() < bestHint.NUMANodeAffinity.Count()`
3cc. `mergedHint.NUMANodeAffinity.Count() == bestHint.NUMANodeAffinity.Count()`

For case (3ca), we want to immediately update bestHint to mergedHint because that will bring us closer to the (higher) value of bestNonPreferredAffinityCount.

For case (3cb), we want to stick with the current bestHint because choosing the current mergedHint would strictly move us further away from the bestNonPreferredAffinityCount.

Finally, for case (3cc), we know that the current bestHint and the current mergedHint are equal, so we simply choose the narrower of the two.

This patch implements this algorithm for the case where we must choose from a set of non-preferred hints and provides a set of unit tests to verify its correctness.

Signed-off-by: Kevin Klues <kklues@nvidia.com>
(force-pushed from a226cdd to e7160eb)
@swatisehgal, @fromanirh, @pacoxu
(force-pushed from e7160eb to 4ac43b0)
Thanks Kevin, will review ASAP
Just noticed a duplicate test but other than that looks good. Please remove that and I will add an LGTM.
Signed-off-by: Kevin Klues <kklues@nvidia.com>
(force-pushed from 4ac43b0 to e370b73)
Thanks @klueska for adding comprehensive tests. Your effort is greatly appreciated!

/test pull-kubernetes-e2e-kind-ipv6
// Finally, for case (3cc), we know that the current bestHint and the
// candidate hint are equal, so we simply choose the narrower of the 2.

// Case 1
nit, but still: the above explanation was just great, so I can't help but wonder if it would have been even better to intermix it with the actual code (kinda literate-ish programming style) instead of first having a pretty long (and very informative) explanation and the chunk of code afterwards.
I actually had it split across them all originally, and I found it a bit harder to follow (even as the author of it). By putting it up front you get the chance to read through it all without being interrupted by the code in between. Let me see if I can come up with some middle ground that still flows well.
I've been wondering myself, and I don't want to slow down this PR needlessly, so up to you!
This is a great PR and it is worth merging for the great commit message and the code cleanup alone. I've added a bunch of minor comments, but they are suggestions to improve even further rather than requests, and by no means do they require a re-upload.
The only real question I have is the following.
There is the main commit message, which goes to great lengths in explaining the rationale for the change, and the description is indeed great. If we grok the basic premise, the rest of the code is very clear and follows very smoothly.
The devil is in the premise itself, I mean specifically here:
We start by looking at the set of hints passed into the TopologyManager for each resource and generate a list of the minimum number of NUMA nodes required to satisfy an allocation for a given resource. Each entry in this list then contains the `minNUMAAffinity.Count()` for a given resource. Once we have this list, we find the *maximum* `minNUMAAffinity.Count()` from the list and mark that as the `bestNonPreferredAffinityCount` that we would like to have associated with whatever "bestHint" we ultimately generate. The intuition being that we would like to (at the very least) get alignment for those resources that *require* multiple NUMA nodes to satisfy their allocation. If we can't quite get there, then we should try to come as close to it as possible.
(emphasis added)
The only problem here is being on the same page about the intuition. From my own past experience, I can think of a few examples indeed, but I'm not sure I'm seeing the very same picture you're referring to.
An example would be great to make sure we immediately get the same intuition and to make the otherwise flawless explanation really complete.
No need to re-upload, a GH comment is fine here.
/lgtm

Thanks for the reviews everyone. Given the only comments are minor nits, I will leave the code as is to avoid requiring one more round of reviews. I will add the example @fromanirh requested to the PR description though. Thanks again. /unhold

Description updated.

Very helpful. Thanks!

/test pull-kubernetes-e2e-gce-ubuntu-containerd
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
For the 'single-numa' and 'restricted' `TopologyManager` policies, pods are only admitted if all of their containers have perfect alignment across the set of resources they are requesting. The `best-effort` policy, on the other hand, will prefer allocations that have perfect alignment, but fall back to a non-preferred alignment if perfect alignment can't be achieved.
The existing algorithm of how to choose the best hint from the set of
"non-preferred" hints is fairly naive and often results in choosing a
sub-optimal hint. It works fine in cases where all resources would end up
coming from a single NUMA node (even if it's not the same NUMA node), but
breaks down as soon as multiple NUMA nodes are required for the "best"
alignment. We will never be able to achieve perfect alignment with these
non-preferred hints, but we should try and do something more intelligent than
simply choosing the hint with the narrowest mask.
In an ideal world, we would have the `TopologyManager` return a set of "resource-relative" hints (as opposed to a common hint for all resources as is
done today). Each resource-relative hint would indicate how many other
resources could be aligned to it on a given NUMA node, and a hint provider
would use this information to allocate its resources in the most aligned way
possible. There are likely some edge cases to consider here, but such an
algorithm would allow us to do partial-perfect-alignment of "some" resources,
even if all resources could not be perfectly aligned.
Unfortunately, supporting something like this would require a major redesign to how the `TopologyManager` interacts with its hint providers (as well as how those hint providers make decisions based on the hints they get back).
That said, we can still do better than the naive algorithm we have today, and
this patch provides a mechanism to do so.
We start by looking at the set of hints passed into the `TopologyManager` for each resource and generate a list of the minimum number of NUMA nodes required to satisfy an allocation for a given resource. In other words, each entry in this list contains the `minNUMAAffinity.Count()` for a given resource. Once we have this list, we find the maximum `minNUMAAffinity.Count()` from the list and mark that as the `bestNonPreferredAffinityCount` that we would like to have associated with whatever "bestHint" we ultimately generate. The intuition being that we would like to (at the very least) get alignment for those resources that require multiple NUMA nodes to satisfy their allocation. If we can't quite get there, then we should try to come as close to it as possible.
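This first step can be sketched as follows. The sketch below is a hedged illustration, not the kubelet's actual code: masks are plain `uint8` stand-ins for the TopologyManager's bitmask type (bit i set means NUMA node i), and the resource names and hint values are made up:

```go
package main

import (
	"fmt"
	"math/bits"
)

// bestNonPreferredCount returns the maximum, across all resources, of the
// minimum number of NUMA nodes any single hint for that resource requires.
// This mirrors the bestNonPreferredAffinityCount computation described above.
func bestNonPreferredCount(hintsPerResource map[string][]uint8) int {
	best := 0
	for _, hints := range hintsPerResource {
		minCount := -1
		for _, h := range hints {
			if c := bits.OnesCount8(h); minCount == -1 || c < minCount {
				minCount = c
			}
		}
		// The resource with the *largest* minimum dominates: we at least
		// want alignment for resources that need multiple NUMA nodes.
		if minCount > best {
			best = minCount
		}
	}
	return best
}

func main() {
	// Hypothetical hints: CPUs fit on any single NUMA node, while every
	// hint for the GPU allocation spans at least two NUMA nodes.
	hints := map[string][]uint8{
		"cpu": {0b00000010, 0b00001000},
		"gpu": {0b00001010, 0b00100010, 0b10100000},
	}
	fmt.Println(bestNonPreferredCount(hints)) // cpu min = 1, gpu min = 2 -> prints 2
}
```
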
For example, consider a machine where we have 8 NUMA nodes with 32
CPUs per NUMA node and 2 GPUs attached to each odd-numbered NUMA node only (e.g. the DGX-A100 server provided by NVIDIA).
Assuming a machine with no resources allocated yet, if a user were to request 32
CPUs (which can fit on one NUMA node) and 4 GPUs (which requires at least 2
NUMA nodes), then you would expect an affinity mask covering two odd-numbered NUMA nodes (for instance {00001010}, i.e. NUMA nodes 1 and 3) to win out as the "best" affinity mask, since it encodes the alignment required for all 4 GPUs to be allocated (even though the 32 CPUs only require a single NUMA node).
However, with the existing algorithm, no such affinity mask will be considered, and a naive result of `{00000010}` will be returned, since that is the "narrowest" alignment with which the allocation of all CPUs (together with some subset of the GPUs) can be satisfied. In effect, the old algorithm lets the "least" constrained
resource influence the hint generation more heavily when what we actually want
is the "more" constrained resource to influence the hint generation more heavily.
To achieve this, the new algorithm proceeds as follows once we have calculated the `bestNonPreferredAffinityCount` as described above:

If the mergedHint and bestHint are both non-preferred, then try and find a hint whose affinity count is as close to (but not higher than) the `bestNonPreferredAffinityCount` as possible. To do this we need to consider the following cases and react accordingly:

1. `bestHint.NUMANodeAffinity.Count() > bestNonPreferredAffinityCount`
2. `bestHint.NUMANodeAffinity.Count() == bestNonPreferredAffinityCount`
3. `bestHint.NUMANodeAffinity.Count() < bestNonPreferredAffinityCount`
For case (1), the current `bestHint` is larger than the `bestNonPreferredAffinityCount`, so updating to any narrower `mergedHint` is preferred over staying where we are.
For case (2), the current `bestHint` is equal to the `bestNonPreferredAffinityCount`, so we would like to stick with what we have unless the current `mergedHint` is also equal to `bestNonPreferredAffinityCount` and it is narrower.

For case (3), the current `bestHint` is less than `bestNonPreferredAffinityCount`, so we would like to creep back up to `bestNonPreferredAffinityCount` as close as we can. There are three cases to consider here:

3a. `mergedHint.NUMANodeAffinity.Count() > bestNonPreferredAffinityCount`
3b. `mergedHint.NUMANodeAffinity.Count() == bestNonPreferredAffinityCount`
3c. `mergedHint.NUMANodeAffinity.Count() < bestNonPreferredAffinityCount`
For case (3a), we just want to stick with the current `bestHint` because choosing a new hint that is greater than `bestNonPreferredAffinityCount` would be counter-productive.
For case (3b), we want to immediately update `bestHint` to the current `mergedHint`, making it now equal to `bestNonPreferredAffinityCount`.

For case (3c), we know that both the current `bestHint` and the current `mergedHint` are less than `bestNonPreferredAffinityCount`, so we want to choose one that brings us back up as close to `bestNonPreferredAffinityCount` as possible. There are three cases to consider here:

3ca. `mergedHint.NUMANodeAffinity.Count() > bestHint.NUMANodeAffinity.Count()`
3cb. `mergedHint.NUMANodeAffinity.Count() < bestHint.NUMANodeAffinity.Count()`
3cc. `mergedHint.NUMANodeAffinity.Count() == bestHint.NUMANodeAffinity.Count()`
For case (3ca), we want to immediately update `bestHint` to `mergedHint` because that will bring us closer to the (higher) value of `bestNonPreferredAffinityCount`.

For case (3cb), we want to stick with the current `bestHint` because choosing the current `mergedHint` would strictly move us further away from the `bestNonPreferredAffinityCount`.

Finally, for case (3cc), we know that the current `bestHint` and the current `mergedHint` are equal, so we simply choose the narrower of the two.

This PR implements this algorithm for the case where we must choose from a set of non-preferred hints and provides a set of unit tests to verify its correctness.
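The case analysis above can be condensed into a single decision function. The sketch below is an interpretation of that description, not the actual kubelet implementation: masks are plain `uint8` values, and `isNarrower` is a simplified stand-in for the bitmask package's narrowness comparison (fewer set bits wins; ties broken by lower-numbered bits):

```go
package main

import (
	"fmt"
	"math/bits"
)

// isNarrower is a simplified stand-in for bitmask narrowness: fewer set
// bits wins; on a tie, the mask using lower-numbered NUMA nodes wins.
func isNarrower(a, b uint8) bool {
	ca, cb := bits.OnesCount8(a), bits.OnesCount8(b)
	if ca != cb {
		return ca < cb
	}
	return a < b
}

// replaceBest reports whether a non-preferred candidate (mergedHint) should
// replace the current non-preferred bestHint, given the target
// bestNonPreferredAffinityCount, following cases (1), (2), and (3a)-(3cc).
func replaceBest(best, cand uint8, target int) bool {
	b, c := bits.OnesCount8(best), bits.OnesCount8(cand)
	switch {
	case b > target: // case (1): any narrower candidate is an improvement
		return isNarrower(cand, best)
	case b == target: // case (2): keep best unless cand also hits target and is narrower
		return c == target && isNarrower(cand, best)
	default: // case (3): best is below target; creep back up toward it
		switch {
		case c > target: // (3a): overshooting the target is counter-productive
			return false
		case c == target: // (3b): candidate lands exactly on the target
			return true
		case c > b: // (3ca): candidate is closer to the target than best
			return true
		case c < b: // (3cb): candidate is strictly further from the target
			return false
		default: // (3cc): equal counts; take the narrower mask
			return isNarrower(cand, best)
		}
	}
}

func main() {
	target := 2
	// best covers one node, candidate covers two: case (3b) -> replace.
	fmt.Println(replaceBest(0b00000010, 0b00001010, target)) // true
	// best already hits the target, candidate covers three: case (2) -> keep.
	fmt.Println(replaceBest(0b00001010, 0b00101010, target)) // false
}
```

Run against the DGX-A100 example above, this prefers a two-node mask over the naive single-node mask while still refusing to grow past the target count.
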
Does this PR introduce a user-facing change?