
cloudhealth-collector pod gets restarted due to emptydir #119

Merged
kscherme merged 1 commit into Broadcom:main from bbilali:patch-1
Jun 6, 2024

Conversation

@bbilali
Contributor

@bbilali bbilali commented May 17, 2024

Currently there is no limit on the amount of memory the emptyDir can consume. According to kubernetes/kubernetes#119611, this can end up crashing the node, because the container memory limit is not taken into account for a memory-backed emptyDir (the emptyDir can consume all of the node's memory, resulting in other processes being killed). Setting the limit to half of the allocated memory should be fine.

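The pattern the PR describes — a memory-backed emptyDir capped with `sizeLimit` at half the container's memory limit — can be sketched as follows. All names, the image, and the sizes below are illustrative assumptions, not taken from the actual chart:

```yaml
# Hypothetical pod spec illustrating the fix: a memory-backed emptyDir
# (tmpfs) capped with sizeLimit so it cannot consume the whole node's memory.
apiVersion: v1
kind: Pod
metadata:
  name: cloudhealth-collector-example   # illustrative name
spec:
  containers:
    - name: collector
      image: example/collector:latest   # placeholder image
      resources:
        limits:
          memory: 2Gi                   # illustrative container memory limit
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir:
        medium: Memory
        sizeLimit: 1Gi                  # half the memory limit, per the PR's rationale
```

With a `sizeLimit` in place, exceeding the cap causes the kubelet to evict the pod, which is far safer than the node running out of memory and the kernel OOM-killing unrelated processes.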
@bbilali bbilali requested a review from a team as a code owner May 17, 2024 12:09
@vmwclabot

@bbilali, you must sign our contributor license agreement before your changes are merged. Click here to sign the agreement. If you are a VMware employee, read this for further instruction.

@vmwclabot

@bbilali, we have received your signed contributor license agreement. The review is usually completed within a week, but may take longer under certain circumstances. Another comment will be added to the pull request to notify you when the merge can proceed.

@kscherme
Contributor

Thank you for your contribution! Our team will take a look in the next few days.

Contributor

@gm-cht gm-cht left a comment

Good one! We never reached the scenario where this tmpfs becomes large by any standard, and all our nodes are disk-backed instead of memory-backed.
Glad we have a fix for this scenario! Thank you!

@kscherme
Contributor

kscherme commented Jun 3, 2024

Thank you for your patience! We are in contact with the Open Source team regarding your contributor license agreement and will get back to you once it is reviewed.

@vmwclabot

@bbilali, VMware has approved your signed contributor license agreement.

@kscherme kscherme merged commit 41b452e into Broadcom:main Jun 6, 2024
@kscherme kscherme mentioned this pull request Jun 6, 2024

5 participants