Fix kubelet panic when allocating resources for a pod. #119561
Conversation
Hi @payall4u. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 6a11677 to 06a7c0b (compare)
/ok-to-test
I think it would be better to add a test that reproduces the bug and verifies that it is fixed by this patch.
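For illustration, such a regression test could look roughly like the sketch below. This is a self-contained, hypothetical example, not the actual kubelet test: the package, `markAllocated`, and the resource/device names are made up, with `markAllocated` standing in for the guarded update this PR adds.

```go
package devicemanagerfix_test // hypothetical package, outside the kubelet tree

import (
	"testing"

	"k8s.io/apimachinery/pkg/util/sets"
)

// markAllocated mimics the guarded update in this PR: it lazily
// initializes the per-resource set before inserting into it.
func markAllocated(allocated map[string]sets.String, resource, device string) {
	if allocated[resource] == nil {
		allocated[resource] = sets.NewString()
	}
	allocated[resource].Insert(device)
}

func TestMarkAllocatedInitializesNilEntry(t *testing.T) {
	allocated := map[string]sets.String{}
	// Without the nil guard, Insert on the zero-value (nil) map entry
	// would panic with "assignment to entry in nil map", since
	// sets.String is a map type.
	markAllocated(allocated, "example.com/gpu", "device-0")
	if !allocated["example.com/gpu"].Has("device-0") {
		t.Fatalf("expected device-0 to be recorded as allocated")
	}
}
```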
if m.allocatedDevices[resource] == nil {
	m.allocatedDevices[resource] = sets.NewString()
}
I think I'm missing something here. Which flows could possibly set this to nil? Once allocatedDevices[resource] is initialized, it is never explicitly reset to nil, and it is only ever assigned the return value of podDevices.devices(), which can never be nil.
Personally, I like updating the allocation only once everything has completed, but I'm wondering whether this approach plays nicely with the overall design (see the comment at lines 834-845). Perhaps we should review that part, but that is a much larger endeavour.
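As a rough illustration of that "update the allocation only once everything has completed" idea, here is a minimal sketch under assumed names (`allocate`, `free`, and the resource name are all illustrative, and this is not the kubelet's actual implementation): the result is computed first and merged into the shared map only after every check has passed.

```go
package main // sketch only, not the kubelet devicemanager

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/sets"
)

// allocate picks `needed` devices from `free` and commits them to
// `allocated` only after all checks have passed, so a failed
// allocation never leaves the shared state partially updated.
func allocate(allocated map[string]sets.String, resource string, needed int, free []string) error {
	if len(free) < needed {
		return fmt.Errorf("not enough %q devices: want %d, have %d", resource, needed, len(free))
	}
	chosen := sets.NewString(free[:needed]...)
	// The same lazy-initialization guard as in the diff above protects
	// against the nil zero-value map entry.
	if allocated[resource] == nil {
		allocated[resource] = sets.NewString()
	}
	allocated[resource] = allocated[resource].Union(chosen)
	return nil
}

func main() {
	allocated := map[string]sets.String{}
	if err := allocate(allocated, "example.com/gpu", 1, []string{"device-0", "device-1"}); err != nil {
		fmt.Println("allocation failed:", err)
		return
	}
	fmt.Println("allocated:", allocated["example.com/gpu"].List())
}
```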
/triage accepted

@payall4u please provide a release note.
LGTM label has been added. Git tree hash: 3b1107227de026e682257c25396fea5cf9165944
@payall4u I don't think we need this code on lines 629-632 anymore.
Right.
Signed-off-by: payall4u <payall4u@qq.com>
Force-pushed from 0a8bb44 to d6b8a66 (compare)
Sorry, could you give it another review? @bart0sh
ping @bart0sh
/lgtm
LGTM label has been added. Git tree hash: 76076e6e067890f1a34d9d2fc078de520007b351
@payall4u: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Milestone Maintainers Team and have them propose you as an additional delegate for this responsibility. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
How about fixing this in 1.30 and cherry-picking it to 1.29?
Sounds good to me.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: klueska, pacoxu, payall4u

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Fixes a kubelet panic when allocating resources for a pod.
Which issue(s) this PR fixes:
Fixes #119560
Does this PR introduce a user-facing change?