Avoid undesirable allocation when device is associated with multiple … #101893
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @kikimo!
Hi @kikimo. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
signed
/ok-to-test
/assign @fromanirh please comment whether it's a bug or just a cleanup
…NUMA Nodes. Suppose there are two devices, dev1 and dev2, each with NUMA nodes associated as below: dev1: numa1; dev2: numa1, numa2. If we request a device from numa2, filterByAffinity() currently returns [], [dev1, dev2], [] when the loop over available devices produces the sequence [dev1, dev2]. That is not desirable, as what we truly expect is an allocation of dev2 from numa2.
@fromanirh @klueska any update?
@klueska node sorting algorithm updated, PTAL
/retest |
/retest |
/retest |
/retest |
@klueska all updated, can we merge now?
I'd like to spend some time to grok the new algo - today and tomorrow likely; at a glance I also like the new direction much more. Added some initial comments, nothing blocking from my side. If @klueska is happy with the changes no need to wait for me, I'll catch up later.
@klueska any further comment?
/approve |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kikimo, klueska. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
…NUMA Nodes
Suppose there are two devices, dev1 and dev2, each with NUMA nodes associated as below:
dev1: numa1
dev2: numa1, numa2
If we request a device from numa2, filterByAffinity() currently returns
[], [dev1, dev2], [] when the loop over available devices produces the sequence [dev1, dev2].
That is not desirable, as what we truly expect is an allocation of dev2 from numa2.
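To illustrate the intended behavior, here is a minimal, self-contained Go sketch of a NUMA-aware device filter. It is not the actual kubelet device manager code: the `Device` type, `filterByAffinity` signature, and the tie-break by NUMA-set size are simplified assumptions for this example. It shows how preferring aligned devices that span fewer NUMA nodes picks dev2 for a numa2 request instead of spuriously consuming dev1.

```go
package main

import (
	"fmt"
	"sort"
)

// Device is a simplified stand-in for a kubelet-managed device;
// the real structures in pkg/kubelet/cm/devicemanager differ.
type Device struct {
	ID   string
	NUMA []int // NUMA nodes this device is associated with
}

func hasNode(d Device, node int) bool {
	for _, n := range d.NUMA {
		if n == node {
			return true
		}
	}
	return false
}

// filterByAffinity partitions devices into those on the requested NUMA
// node and the rest. Among aligned devices, those spanning fewer NUMA
// nodes sort first, so a device dedicated to the requested node is
// preferred over one shared across several nodes.
func filterByAffinity(devs []Device, node int) (aligned, unaligned []Device) {
	for _, d := range devs {
		if hasNode(d, node) {
			aligned = append(aligned, d)
		} else {
			unaligned = append(unaligned, d)
		}
	}
	sort.SliceStable(aligned, func(i, j int) bool {
		return len(aligned[i].NUMA) < len(aligned[j].NUMA)
	})
	return aligned, unaligned
}

func main() {
	devs := []Device{
		{ID: "dev1", NUMA: []int{1}},
		{ID: "dev2", NUMA: []int{1, 2}},
	}
	aligned, unaligned := filterByAffinity(devs, 2)
	// dev2 is the only device associated with numa2, so it is allocated
	// first; dev1 stays available for numa1-only requests.
	fmt.Println(aligned[0].ID, len(unaligned)) // prints: dev2 1
}
```

With this partitioning, a request pinned to numa2 never returns dev1 as aligned, which matches the allocation the PR description expects.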
What type of PR is this?
/kind bug
What this PR does / why we need it:
Avoid undesirable allocation when device is associated with multiple NUMA Nodes.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: