Heapster sizing adjustments #22940
Conversation
Labelling this PR as size/M
GCE e2e build/test passed for commit 07d5e67a5982073c36ddf5514daff6f15f3939bd.
{% set num_nodes = pillar.get('num_nodes', -1) -%}
{% if num_nodes >= 0 -%}
{% set heapster_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set metrics_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set eventer_memory = (200000 + num_nodes * 200)|string + "Ki" -%}
s/200000/200 * 1024/
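For illustration, the sizing formulas from this version of the Salt template can be sketched in Python (the function name `heapster_sizing` is made up here; the constants and variable names mirror the template):

```python
def heapster_sizing(num_nodes):
    """Return per-container memory requests as Kubernetes quantity strings,
    mirroring the Jinja expressions in the diff above."""
    if num_nodes < 0:
        # The pillar default is -1, meaning the node count is unknown;
        # the template's {% if num_nodes >= 0 %} guard skips sizing then.
        raise ValueError("num_nodes unknown")
    heapster_memory = f"{200 + num_nodes * 3}Mi"       # 200 Mi base + 3 Mi per node
    metrics_memory = f"{200 + num_nodes * 3}Mi"        # same formula at this point in the PR
    eventer_memory = f"{200000 + num_nodes * 200}Ki"   # 200000 Ki base + 200 Ki per node
    return heapster_memory, metrics_memory, eventer_memory

print(heapster_sizing(100))  # ('500Mi', '500Mi', '220000Ki')
```

The reviewer's `s/200000/200 * 1024/` suggestion is about spelling the eventer base as a product so the unit conversion is explicit.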
Force-pushed from 07d5e67 to 798f0bd
GCE e2e build/test passed for commit 798f0bd7aeac09acfd0f1847a47638d95a63e7a1.
please ping this thread when you think it is ready to merge
{% set num_nodes = pillar.get('num_nodes', -1) -%}
{% if num_nodes >= 0 -%}
{% set heapster_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set metrics_memory = (200 + num_nodes * 4)|string + "Mi" -%}
Are we increasing the memory requirement for heapster pod as a whole?
Yes. Just in case.
For a 1000-node cluster we are able to operate without any sink at 2-2.3 GB (with 30k pause pods and 2k system pods). GKE http output creation consumes an extra 400 MB. That brings us to 2.75 GB. If a couple of events occur at once (scraping, GKE scraping, and a pod relist), the temporary memory consumption can be bigger. So to be on the VERY safe side I decided to go with a multiplier of 4.
As we generated a lot of noise (=events) during the tests, it became apparent that Eventer should also have more memory. The list that happens inside of watch consumes LOTS of memory (we will try to get rid of it just after 1.2).
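The arithmetic in that comment can be checked quickly (the observed-usage figures are taken from the comment; the headroom computation is my own back-of-the-envelope check):

```python
# Observed usage for a 1000-node cluster, per the comment above.
base_usage_gb = 2.3   # upper bound of 2-2.3 GB without any sink
gke_http_gb = 0.4     # extra cost of GKE http output creation
observed_gb = base_usage_gb + gke_http_gb  # ~2.7 GB, quoted as 2.75 in the comment

# Request produced by the new formula (200 Mi base + 4 Mi per node) for 1000 nodes.
request_mi = 200 + 1000 * 4        # 4200 Mi
request_gb = request_mi / 1024     # ~4.1 GB

headroom_gb = request_gb - observed_gb
print(f"request ~{request_gb:.1f} GB, headroom ~{headroom_gb:.1f} GB over observed usage")
```

So the multiplier of 4 leaves roughly 1.4 GB of slack above the measured peak, which is what absorbs the simultaneous scraping/relist spikes the comment describes.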
Force-pushed from 798f0bd to f5f6a80
Also bumping the image to beta2.
Force-pushed from f5f6a80 to 6123df9
GCE e2e build/test passed for commit f5f6a80e89544ffe5fb77fa542ac699dd84b9514.
GCE e2e build/test passed for commit 6123df9.
LGTM
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
{% set num_nodes = pillar.get('num_nodes', -1) -%}
{% if num_nodes >= 0 -%}
{% set heapster_memory = (200 + num_nodes * 3)|string + "Mi" -%}
{% set metrics_memory = (200 + num_nodes * 4)|string + "Mi" -%}
{% set eventer_memory = (200000 + num_nodes * 500)|string + "Ki" -%}
/s/200000/200*1024
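Worth noting on the repeated `s/200000/200*1024/` suggestion: the two spellings are not equal, so the substitution would also round the base up to an even 200 MiB (a quick check, mine, not from the thread):

```python
# The literal base in the template vs. the reviewer's suggested spelling.
literal_base_ki = 200000        # as written: 200000 Ki, which is ~195.3 Mi
suggested_base_ki = 200 * 1024  # 204800 Ki, i.e. exactly 200 Mi

difference_ki = suggested_base_ki - literal_base_ki  # 4800 Ki bumped by the rewrite
print(literal_base_ki, suggested_base_ki, difference_ki)
```

Either value is within the safety margin discussed above; the suggestion is mainly about making the intended 200 Mi base explicit.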
GCE e2e build/test passed for commit 6123df9.
Automatic merge from submit-queue
Auto commit by PR queue bot
PR description still says "do not merge yet".
Not good for release-note generation.
And please ensure all PRs have a description or at least a reference to an issue.
Auto commit by PR queue bot (cherry picked from commit f899a40)
Commit be2ad9e found in the "release-1.2" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error, find help to get your PR picked.
@bgrant0607 Please merge it to 1.2. |
@mwielgus this is already merged to 1.2 branch. |
Do not merge yet.