[1.26] kubelet: devices: skip allocation for running pods #118635 #119706
Conversation
/sig node
/retest seems unrelated
/test pull-kubernetes-e2e-gce-serial
/triage accepted
Force-pushed from 211504b to 180aa30
rebased to fix a conflict
/test pull-kubernetes-e2e-capz-windows-containerd-1-26
/lgtm
Re-applying as the label was removed due to rebase
LGTM label has been added. Git tree hash: d19eb9ab87ddee587ebee47c21c0756e51cf41ab
/lgtm
/cc @kubernetes/release-managers Release managers, can you please take a look at this cherry-pick? Thank you!
/hold cancel 1.27 woes solved - and I think we won't merge cherry-picks out of expected order anyway
For RelEng:
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ffromani, mrunalp, xmudrii. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
What type of PR is this?
/kind bug
/kind regression
What this PR does / why we need it:
Cherry-pick of #118635 to branch
release-1.26
through #119432. The cherry-pick itself was done using hack/cherry_pick_pull.sh
Original description
When the kubelet initializes, it runs admission for pods and possibly allocates the requested resources. We need to distinguish between a node reboot (no containers running) and a kubelet restart (containers potentially running).
Running pods should always survive a kubelet restart. This means that device allocation should not be attempted on admission: if a container requires devices and is still running when the kubelet restarts, that container already has devices allocated and working.
Thus, we need to properly detect this scenario in the allocation step and handle it explicitly, by informing the devicemanager about which pods are already running.
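The skip-allocation idea described above can be sketched as follows. This is a simplified, hypothetical illustration, not the actual kubelet code: the type and field names (deviceManager, containerRunningSet, allocated, and so on) are invented for the example.

```go
package main

import "fmt"

// containerRunningSet records which containers were found running
// when the kubelet (re)started. Hypothetical name for illustration.
type containerRunningSet map[string]bool

type deviceManager struct {
	// devices already assigned per container, as restored
	// from the checkpoint files
	allocated map[string][]string
	running   containerRunningSet
}

// Allocate skips device allocation for containers that are already
// running with devices assigned: they survived a kubelet restart,
// so their devices are already allocated and working.
func (m *deviceManager) Allocate(containerID string, want int) {
	if m.running[containerID] {
		if _, ok := m.allocated[containerID]; ok {
			// kubelet restart: container kept running, keep its devices
			return
		}
	}
	// node reboot or new container: perform a fresh allocation
	m.allocated[containerID] = make([]string, want)
}

func main() {
	m := &deviceManager{
		allocated: map[string][]string{"c1": {"dev0"}},
		running:   containerRunningSet{"c1": true},
	}
	m.Allocate("c1", 3) // running with checkpoint: keeps existing devices
	m.Allocate("c2", 2) // new container: fresh allocation
	fmt.Println(len(m.allocated["c1"]), len(m.allocated["c2"]))
}
```

The key point is that the running set is consulted before any allocation is attempted, so a restart never disturbs devices that are in active use.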
Which issue(s) this PR fixes:
Fixes #118559
Special notes for your reviewer:
Implements the first approach proposed in the thread, so we make the devicemanager treat running pods differently.
This approach was chosen because it seems simpler to make self-contained and easier to backport.
The devicemanager already tracks (with the help of the checkpoint files) which containers got devices assigned to them, which by definition means those containers passed its admission. The missing bit is safely learning which containers are already running at initialization time, and for that we extend the existing
buildContainerMapFromRuntime
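The extension described above could look roughly like the following sketch. The pod/container shapes are hypothetical stand-ins for the real kubelet runtime types, and the function name is adapted for the example; only the idea (return the set of running containers alongside the container map) mirrors the PR.

```go
package main

import "fmt"

// Simplified, hypothetical runtime shapes; the real kubelet
// types differ.
type container struct {
	ID      string
	Running bool
}
type pod struct{ Containers []container }

// buildContainerMapAndRunningSet mimics extending a
// buildContainerMapFromRuntime-style helper to also report
// which containers were found running at kubelet startup.
func buildContainerMapAndRunningSet(pods []pod) (map[string]container, map[string]bool) {
	containerMap := make(map[string]container)
	runningSet := make(map[string]bool)
	for _, p := range pods {
		for _, c := range p.Containers {
			containerMap[c.ID] = c
			if c.Running {
				runningSet[c.ID] = true
			}
		}
	}
	return containerMap, runningSet
}

func main() {
	pods := []pod{{Containers: []container{
		{ID: "c1", Running: true},
		{ID: "c2", Running: false},
	}}}
	_, running := buildContainerMapAndRunningSet(pods)
	fmt.Println(running["c1"], running["c2"])
}
```

The running set built this way can then be handed to the devicemanager so its allocation step knows which containers must not be re-allocated.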
Does this PR introduce a user-facing change?